Search results for: data augmentation
25154 Introducing Global Navigation Satellite System Capabilities into IoT Field-Sensing Infrastructures for Advanced Precision Agriculture Services
Authors: Savvas Rogotis, Nikolaos Kalatzis, Stergios Dimou-Sakellariou, Nikolaos Marianos
Abstract:
As precision holds the key to distinct benefits in agriculture (e.g., energy savings, reduced labor costs, optimal application of inputs, improved products and yields), it steadily becomes evident that new initiatives should focus on rendering Precision Agriculture (PA) more accessible to the average farmer. PA leverages technologies such as the Internet of Things (IoT), earth observation, robotics, and positioning systems (e.g., Global Navigation Satellite Systems (GNSS) such as GPS, GLONASS, and Galileo) that enable everything from simple data georeferencing to optimal navigation of agricultural machinery to even more complex tasks like Variable Rate Applications. An identified customer pain point is that, on the one hand, typical standalone positioning receivers are not accurate enough (with errors of up to several meters), while on the other hand, high-precision positioning systems reaching centimeter-level accuracy are very costly (up to thousands of euros). Within this paper, a Ground-Based Augmentation System (GBAS) is introduced that can be adapted to any existing IoT field-sensing station infrastructure. The latter should cover a minimum set of requirements; in particular, each station should operate as a fixed, energy-supplied unit with an unobstructed view of the sky. Station augmentation will allow stations to function in pairs with GNSS rovers following the differential GNSS base-rover paradigm. This constitutes the key innovation of the proposed solution, which incorporates differential GNSS capabilities into an IoT field-sensing infrastructure. Integrating this kind of information supports the provision of several additional beneficial PA services such as spatial mapping, route planning, and automatic field navigation of unmanned vehicles (UVs). Right at the heart of the designed system is a high-end GNSS toolkit with base-rover variants and Real-Time Kinematic (RTK) capabilities. The GNSS toolkit had to tackle all availability, performance, interfacing, and energy-related challenges faced in real-time, low-power, and reliable in-field operation. Specifically, in terms of performance, preliminary findings exhibit a high rover positioning precision that can even reach below 10 centimeters. As this precision is propagated to the full dataset collection, it enables tractors, UVs, Android-powered devices, and measuring units to deal with challenging real-world scenarios. The system is validated with the help of Gaiatrons, a mature network of agro-climatic telemetry stations with a presence all over Greece and beyond (>60,000 ha of agricultural land covered) that constitutes part of the "gaiasense" (www.gaiasense.gr) smart farming (SF) solution. Gaiatrons constantly monitor atmospheric and soil parameters, providing an exact fit to the operational requirements of modern SF infrastructures. Gaiatrons are ultra-low-cost, compact, and energy-autonomous stations with a modular design that enables the integration of advanced GNSS base station capabilities on top of them. A set of demanding pilot demonstrations has been initiated in Stimagka, Greece, an area with a diverse geomorphological landscape where grape cultivation is particularly popular.
Pilot demonstrations are in the course of validating the preliminary system findings in the intended environment, tackling all technical challenges, and effectively highlighting the added value offered by the system in action.
Keywords: GNSS, GBAS, precision agriculture, RTK, smart farming
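As a rough illustration of the base-rover principle described above, the sketch below shows how corrections computed at a surveyed base station cancel errors common to a nearby rover. All positions, the error magnitude, and the satellite geometry are toy values, and real RTK additionally uses carrier-phase measurements, so this is only the geometric idea, not the paper's implementation:

```python
# Toy differential-GNSS correction: base and rover share a common-mode range error
import numpy as np

sats = np.array([[15e6, 7e6, 20e6],      # invented satellite positions (m)
                 [-9e6, 12e6, 21e6],
                 [4e6, -16e6, 19e6]])
base_true = np.array([4.60e6, 2.01e6, 3.90e6])   # surveyed base-station position (m)

def geom_range(pos):
    """Geometric distance from a receiver position to each satellite."""
    return np.linalg.norm(sats - pos, axis=1)

common_error = 3.2                                # toy clock/atmospheric bias (m)
base_measured = geom_range(base_true) + common_error
corrections = base_measured - geom_range(base_true)   # per-satellite corrections

rover_true = base_true + np.array([120.0, -40.0, 5.0])  # rover in a nearby field
rover_measured = geom_range(rover_true) + common_error
rover_corrected = rover_measured - corrections    # common-mode error cancels
print(np.allclose(rover_corrected, geom_range(rover_true)))  # True
```

The corrected ranges would then feed a standard least-squares position fix at the rover.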
Procedia PDF Downloads 115
25153 Multi-Source Data Fusion for Urban Comprehensive Management
Authors: Bolin Hua
Abstract:
In city governance, various data are involved, including city component data, demographic data, housing data, and all kinds of business data. These data reflect different aspects of people, events, and activities. Data generated by various systems differ in form, and data sources differ because they may come from different sectors. In order to reflect one or several facets of an event or rule, data from multiple sources need to be fused together. Data from different sources, collected in different ways, raise several issues which need to be resolved. Problems of data fusion include data update and synchronization, data exchange and sharing, file parsing and entry, duplicate data and its comparison, and resource catalogue construction. Governments adopt statistical analysis, time series analysis, extrapolation, monitoring analysis, value mining, and scenario prediction in order to achieve pattern discovery, law verification, root cause analysis, and public opinion monitoring. The result of multi-source data fusion is a uniform central database, which includes people data, location data, object data, institution data, business data, and space data. Metadata must be maintained and consulted whenever an application needs to access, manipulate, and display the data. Uniform metadata management ensures the effectiveness and consistency of data in the process of data exchange, data modeling, data cleansing, data loading, data storing, data analysis, data search, and data delivery.
Keywords: multi-source data fusion, urban comprehensive management, information fusion, government data
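A minimal sketch of the kind of metadata-driven fusion described above, using pandas; the source schemas, column mappings, and records are invented for illustration:

```python
import pandas as pd

# Toy person records from two sectors (schemas assumed)
housing = pd.DataFrame({"resident_id": [1, 2], "district": ["East", "West"]})
business = pd.DataFrame({"owner_id": [2, 3], "licence": ["B-77", "B-12"]})

# Uniform metadata: maps each source's columns onto the central schema
META = {"housing": {"resident_id": "person_id"},
        "business": {"owner_id": "person_id"}}

people = pd.concat(
    [housing.rename(columns=META["housing"]),
     business.rename(columns=META["business"])],
    ignore_index=True,
)
# Duplicate comparison: one row per person, merging attributes from both sources
people = people.groupby("person_id", as_index=False).first()
print(people)
```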
Procedia PDF Downloads 393
25152 Reviewing Privacy Preserving Distributed Data Mining
Authors: Sajjad Baghernezhad, Saeideh Baghernezhad
Abstract:
Nowadays, given the human role in the ever-increasing growth of data, methods such as data mining for extracting knowledge are unavoidable. One of the issues in data mining is the inherent distribution of data: the bases creating or receiving such data usually belong to corporate or non-corporate persons who do not give their information freely to others. Yet there is no guarantee that particular data can be mined without intruding on the owner's privacy. Distributing data and then gathering the results from each site depends on whether the data are partitioned vertically or horizontally and on the privacy-preserving technique employed, and is done to improve data privacy. In this study, we attempt a comprehensive comparison of privacy-preserving methods; general methods such as data randomization and encoding are examined, along with the strong and weak points of each.
Keywords: data mining, distributed data mining, privacy protection, privacy preserving
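The "random data" (perturbation) approach mentioned above can be sketched in a few lines: each data holder adds noise before release, so individual values are masked while aggregates remain estimable. The noise scale and data below are assumed, not taken from the paper:

```python
# Randomization-based privacy sketch: perturb before sharing, aggregate afterwards
import numpy as np

rng = np.random.default_rng(0)
salaries = rng.normal(50_000, 8_000, size=1_000)   # private local data (synthetic)

# Perturbed release: Laplace noise masks individual records
shared = salaries + rng.laplace(scale=2_000, size=salaries.size)

# The miner never sees raw values, but population statistics survive
print(round(salaries.mean()), round(shared.mean()))
```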
Procedia PDF Downloads 525
25151 The Right to Data Portability and Its Influence on the Development of Digital Services
Authors: Roman Bieda
Abstract:
The General Data Protection Regulation (GDPR) will come into force on 25 May 2018, creating a new legal framework for the protection of personal data in the European Union. Article 20 of the GDPR introduces a right to data portability. This right allows data subjects to receive the personal data which they have provided to a data controller in a structured, commonly used, and machine-readable format, and to transmit these data to another data controller. The right to data portability, by facilitating the transfer of personal data between IT environments (e.g., applications), will also facilitate changing the provider of services (e.g., changing a bank or a cloud computing service provider). Therefore, it will contribute to the development of competition and the digital market. The aim of this paper is to discuss the right to data portability and its influence on the development of new digital services.
Keywords: data portability, digital market, GDPR, personal data
Procedia PDF Downloads 473
25150 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking
Authors: Noga Bregman
Abstract:
Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
Keywords: earthquake, detection, phase picking, S waves, P waves, transformer, deep learning, seismic waves
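The architecture described above maps onto a compact PyTorch sketch. All layer sizes, kernel sizes, and head names below are assumptions, and the Mamba block is taken from the open-source mamba-ssm package rather than the authors' code:

```python
# Hypothetical structural sketch of an EQMamba-style network (sizes assumed)
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # assumed: the open-source mamba-ssm package

class EQMambaSketch(nn.Module):
    def __init__(self, in_ch=3, width=16):
        super().__init__()
        # Encoder: convolutions + max pooling for feature extraction
        self.encoder = nn.Sequential(
            nn.Conv1d(in_ch, width, kernel_size=11, padding=5), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(width, 2 * width, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM over the encoded sequence
        self.bilstm = nn.LSTM(2 * width, width, batch_first=True, bidirectional=True)
        # Mamba block applied to BiLSTM outputs, as the abstract describes
        self.mamba = Mamba(d_model=2 * width, d_state=16, d_conv=4, expand=2)
        # Separate heads for detection, P picking, and S picking
        self.heads = nn.ModuleDict(
            {name: nn.Conv1d(2 * width, 1, kernel_size=1) for name in ("det", "p", "s")}
        )

    def forward(self, x):                       # x: (batch, channels, samples)
        z = self.encoder(x)                     # (B, C, L)
        z, _ = self.bilstm(z.transpose(1, 2))   # (B, L, C)
        z = self.mamba(z).transpose(1, 2)       # back to (B, C, L)
        return {k: torch.sigmoid(h(z)) for k, h in self.heads.items()}

def eqmamba_loss(outputs, targets, weights=(("det", 1.0), ("p", 1.0), ("s", 1.0))):
    """Weighted sum of per-task binary cross-entropy losses, as described."""
    bce = nn.BCELoss()
    return sum(w * bce(outputs[k], targets[k]) for k, w in weights)
```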
Procedia PDF Downloads 55
25149 Recent Advances in Data Warehouse
Authors: Fahad Hanash Alzahrani
Abstract:
This paper describes some recent advances in the quickly developing area of data storage and processing based on data warehouse and data mining techniques. These advances are associated with software, hardware, data mining algorithms, and visualisation techniques that share common features across the specific problems and tasks of their implementation.
Keywords: data warehouse, data mining, knowledge discovery in databases, on-line analytical processing
Procedia PDF Downloads 404
25148 How to Use Big Data in Logistics Issues
Authors: Mehmet Akif Aslan, Mehmet Simsek, Eyup Sensoy
Abstract:
Big Data stands for today's cutting-edge technology. As the technology becomes widespread, so does the data. Utilizing massive data sets enables companies to gain competitive advantages over their adversaries. Among the many areas of Big Data usage, logistics plays a significant role in both the commercial sector and the military. This paper lays out what Big Data is and how it is used in both military and commercial logistics.
Keywords: big data, logistics, operational efficiency, risk management
Procedia PDF Downloads 641
25147 Heat Transfer and Turbulent Fluid Flow over Vertical Double Forward-Facing Step
Authors: Tuqa Abdulrazzaq, Hussein Togun, M. K. A. Ariffin, S. N. Kazi, A. Badarudin, N. M. Adam, S. Masuri
Abstract:
A numerical study of heat transfer and fluid flow over a vertical double forward-facing step is presented. The k-ω model with the finite volume method was employed to solve the continuity, momentum, and energy equations. Different step heights were adopted for Reynolds numbers ranging from 10,000 to 40,000 and temperatures ranging from 310 K to 340 K. The straight side of the duct is insulated, while the side with the double forward-facing step is heated. The results show an augmentation of heat transfer due to the recirculation regions created before and after the steps. The effects of step length and Reynolds number are observed in the increase of the local Nusselt number, particularly in the recirculation regions. Streamline velocity contours are plotted to show the recirculation regions before and after the steps. The numerical simulations in this paper were performed with ANSYS Fluent 14.
Keywords: turbulent flow, double forward, heat transfer, separation flow
Procedia PDF Downloads 461
25146 Numerical Analysis of Solar Cooling System
Authors: Nadia Allouache, Mohamed Belmedani
Abstract:
Solar energy is a sustainable, virtually inexhaustible, and environmentally friendly alternative to the fossil fuels available. It is a renewable and economical energy source that can be harnessed sustainably over the long term and thus stabilizes energy costs. Solar cooling technologies have been developed to decrease the growing electricity consumption for air conditioning and to displace the peak load during hot summer days. A numerical analysis of the thermal and solar performance of an annular finned adsorber, which is the most important component of the adsorption solar refrigerating system, is considered in this work. Different adsorbent/adsorbate pairs, such as activated carbon AC35/methanol, activated carbon AC35/ethanol, and activated carbon BPL/ammonia, are studied. Modeling the adsorption cooling machine requires the resolution of the equations describing the energy and mass transfer in the tubular finned adsorber. The Wilson and Dubinin-Astakhov models of the solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium and the fins are contained in the annular space, and the adsorber is heated by solar energy. The effects of key parameters on the adsorbed quantity and on the thermal and solar performance are analysed and discussed. The AC35/methanol pair is the best pair compared to the BPL/ammonia and AC35/ethanol pairs in terms of system performance. The system performance is sensitive to the fin geometry. For the considered data, measured on clear days of July 2023 in Algeria and Morocco, the performance of the cooling system is very significant in Algeria.
Keywords: activated carbon AC35-methanol pair, activated carbon AC35-ethanol pair, activated carbon BPL-ammonia pair, annular finned adsorber, performance coefficients, numerical analysis, solar cooling system
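For reference, the Dubinin-Astakhov equilibrium mentioned above is commonly written as follows (standard form; the paper's exact notation is not given in the abstract):

```latex
W = W_0 \exp\!\left[-\left(\frac{A}{E}\right)^{n}\right],
\qquad
A = R\,T \ln\!\frac{P_{\mathrm{sat}}(T)}{P}
```

where W is the adsorbed volume per unit adsorbent mass, W₀ the limiting uptake, E the characteristic energy of the pair, n the heterogeneity exponent, and A the adsorption potential.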
Procedia PDF Downloads 55
25145 Numerical Analysis of Various V-Rib Cross-Sections to Optimize Thermal Performance of the Rocket Engine
Authors: Hisham Elmouazen, Xiaobing Zhang
Abstract:
In regenerative-cooled rocket engines, understanding the coolant behaviour within cooling channels is essential to enhance engine performance and maintain chamber walls at low temperatures. However, modelling and testing the rocket engine's cooling channels is challenging due to the high temperature of the chamber walls, supercritical flow, and high Reynolds number. Therefore, a numerical analysis of five different V-rib cross-sections to optimize the performance of rocket engine cooling channels is developed and validated in this work. Three-dimensional CFD simulations are performed with the Shear Stress Transport (SST) k-ω turbulence model at a Reynolds number of 42,500. The study findings illustrate that the V-ribbed channel performance is improved by 59.5% relative to the plain/flat channel. Additionally, the chamber wall temperature is decreased to 726.4 K, and the right-angle trapezoidal V-rib (Case 4) improves thermal augmentation by up to 74.3% with a slightly higher friction factor.
Keywords: computational fluid dynamics CFD, regenerative-cooled system, thermal performance, V-rib cross-sections
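The abstract does not state how the 59.5% performance figure is defined; studies of rib-roughened channels commonly use the thermal enhancement factor at equal pumping power, which balances the Nusselt-number gain against the friction penalty. Under that assumption:

```latex
\eta = \frac{\mathrm{Nu}/\mathrm{Nu}_0}{\left(f/f_0\right)^{1/3}}
```

with the subscript 0 denoting the plain/flat channel.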
Procedia PDF Downloads 75
25144 Implementation of an IoT Sensor Data Collection and Analysis Library
Authors: Jihyun Song, Kyeongjoo Kim, Minsoo Lee
Abstract:
Due to the development of information technology and wireless Internet technology, various data are being generated in many fields. These data are advantageous in that they provide real-time information to the users themselves. However, when the data are accumulated and analyzed, much more information can be extracted. In addition, the development and dissemination of boards such as the Arduino and Raspberry Pi have made it possible to easily test various sensors, and it is possible to collect sensor data directly by using database application tools such as MySQL. These directly collected data can be used for various research purposes and are useful for data mining. However, there are many difficulties in using such boards to collect data, especially when the user is not a computer programmer or is using them for the first time. Even if data are collected, a lack of expert knowledge or experience may cause difficulties in data analysis and visualization. In this paper, we aim to construct a library for sensor data collection and analysis to overcome these problems.
Keywords: clustering, data mining, DBSCAN, k-means, k-medoids, sensor data
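A minimal sketch of the clustering step such a library could wrap, using scikit-learn's KMeans and DBSCAN (k-medoids is available separately in scikit-learn-extra); the sensor readings are synthetic stand-ins for rows logged to MySQL:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(0)
# Stand-in for logged (temperature, humidity) readings from two environments
readings = np.vstack([rng.normal([21, 40], 1.5, (100, 2)),
                      rng.normal([30, 70], 1.5, (100, 2))])
X = StandardScaler().fit_transform(readings)   # put both features on one scale

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
dbscan_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)  # label -1 marks outliers
```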
Procedia PDF Downloads 378
25143 Government (Big) Data Ecosystem: Definition, Classification of Actors, and Their Roles
Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis
Abstract:
Organizations, including governments, generate (big) data that are high in volume, velocity, and veracity and come from a variety of sources. Public administrations are using (big) data, implementing base registries, and enforcing data sharing within the entire government to deliver (big) data-related integrated services, provide insights to users, and support good governance. Government (big) data ecosystem actors represent distinct entities that provide data, consume data, manipulate data to offer paid services, and extend data services like data storage and hosting services to other actors. In this research work, we perform a systematic literature review. The key objectives of this paper are to propose a robust definition of the government (big) data ecosystem and a classification of government (big) data ecosystem actors and their roles. We showcase a graphical view of actors, roles, and their relationships in the government (big) data ecosystem. We also discuss our research findings. We did not find many published research articles about the government (big) data ecosystem, including its definition and the classification of actors and their roles. Therefore, we drew ideas for the government (big) data ecosystem from numerous areas in the literature, including scientific research data, humanitarian data, open government data, and industry data.
Keywords: big data, big data ecosystem, classification of big data actors, big data actors roles, definition of government (big) data ecosystem, data-driven government, eGovernment, gaps in data ecosystems, government (big) data, public administration, systematic literature review
Procedia PDF Downloads 162
25142 Numerical Investigation of Heat Transfer in a Channel with Delta Winglet Vortex Generators at Different Reynolds Numbers
Authors: N. K. Singh
Abstract:
In this study, the augmentation of heat transfer in a rectangular channel with triangular vortex generators is evaluated. The spanwise-averaged Nusselt number, mean temperature, and total heat flux are compared with and without vortex generators in the channel at a blade angle of 30° for Reynolds numbers of 800, 1200, 1600, and 2000. The use of vortex generators considerably increases the spanwise-averaged Nusselt number compared to the case without vortex generators. At a particular blade angle, increasing the Reynolds number results in an enhancement of the overall performance, and the spanwise-averaged Nusselt number was found to be greater at a particular location for larger Reynolds numbers. The total heat flux from the bottom wall with vortex generators was found to be greater than that without vortex generators, and the difference increases with increasing Reynolds number.
Keywords: heat transfer, channel with vortex generators, numerical simulation, effect of Reynolds number on heat transfer
Procedia PDF Downloads 331
25141 Government Big Data Ecosystem: A Systematic Literature Review
Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis
Abstract:
Data that are high in volume, velocity, and veracity and come from a variety of sources are generated in all sectors, including the government sector. Globally, public administrations are pursuing (big) data as a new technology and trying to adopt a data-centric architecture for hosting and sharing data. Properly executed, big data and data analytics in the government (big) data ecosystem can lead to data-driven government and have a direct impact on the way policymakers work and citizens interact with governments. In this research paper, we conduct a systematic literature review. The main aims of this paper are to highlight essential aspects of the government (big) data ecosystem and to explore the most critical socio-technical factors that contribute to its successful implementation. The essential aspects of the government (big) data ecosystem include its definition, data types, data lifecycle models, and actors and their roles. We also discuss the potential impact of (big) data in public administration and gaps in the government data ecosystem literature. As this is a new topic, we did not find specific articles on the government (big) data ecosystem and therefore focused our research on various relevant areas like humanitarian data, open government data, scientific research data, and industry data.
Keywords: applications of big data, big data, big data types, big data ecosystem, critical success factors, data-driven government, egovernment, gaps in data ecosystems, government (big) data, literature review, public administration, systematic review
Procedia PDF Downloads 230
25140 A Machine Learning Decision Support Framework for Industrial Engineering Purposes
Authors: Anli Du Preez, James Bekker
Abstract:
Data is currently one of the most critical and influential emerging technologies. However, the true potential of data is yet to be exploited since, currently, about 1% of generated data are ever actually analyzed for value creation. There is a data gap where data is not explored due to the lack of data analytics infrastructure and the required data analytics skills. This study developed a decision support framework for data analytics by following Jabareen's framework development methodology. The study focused on machine learning algorithms, a subset of data analytics. The developed framework is designed to assist data analysts with little experience in choosing the appropriate machine learning algorithm given the purpose of their application.
Keywords: data analytics, industrial engineering, machine learning, value creation
Procedia PDF Downloads 168
25139 Providing Security to Private Cloud Using Advanced Encryption Standard Algorithm
Authors: Annapureddy Srikant Reddy, Atthanti Mahendra, Samala Chinni Krishna, N. Neelima
Abstract:
In our present world, we are generating a lot of data, and we need a specific place to store all these data. Generally, we store data on pen drives, hard drives, etc. Sometimes we may lose the data due to the corruption of these devices. To overcome these issues, we implemented a cloud space for storing the data, which provides more security. The data can be accessed from anywhere in the world using just the Internet. We implemented all of this in Java using the NetBeans IDE. Once a user uploads the data, they have no rights to change it. Uploaded files are stored in the cloud with the system time as the file name, and the directory is created with some random words. The cloud accepts a file only if its size is less than 2 MB.
Keywords: cloud space, AES, FTP, NetBeans IDE
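A hedged sketch of the upload rules described above, using AES-GCM from the Python cryptography package (the abstract's Java implementation is not available, so the function names and parameters here are assumptions):

```python
# Sketch: size check, AES encryption, and system-time file naming
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

MAX_SIZE = 2 * 1024 * 1024   # reject files over 2 MB, as described

def store_upload(data: bytes, key: bytes) -> tuple[str, bytes]:
    if len(data) > MAX_SIZE:
        raise ValueError("file exceeds 2 MB limit")
    nonce = os.urandom(12)                        # fresh nonce per file
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    filename = time.strftime("%Y%m%d%H%M%S")      # system time as the file name
    return filename, nonce + ciphertext

key = AESGCM.generate_key(bit_length=256)
name, blob = store_upload(b"patient record", key)
print(name, len(blob))
```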
Procedia PDF Downloads 206
25138 Analyzing the Plausible Alternatives in Contracting the Societal Fissure Caused by Digital Divide in Sri Lanka
Authors: Manuela Nayantara Jeyaraj
Abstract:
'Digital Divide' is a concept that has existed in this paradigm ever since the discovery of first-generation technologies. Before the turn of the century, it was used mainly to describe the gap between those with telephone communication access and those without it. At present, it plainly describes the gap between those with Internet access and those without. Though the concept of the digital divide has been in plain sight for as long as the technologies themselves, the friction it causes has not yet been fully addressed in major crisis situations. Unlike well-developed countries, Sri Lanka is still on the path from developing to developed status. Access to technological resources varies from region to region, even within the island itself, with one region having a considerable percentage of its community exposed to the Internet and its related technologies, and another unaware of such. Thus, this paper analyzes the roots of the still-extant gap instigated by the 'Digital Divide' and explores the plausible potential that could be realized by narrowing this prevailing divide among the population, specifically entrenching the advantages reaped towards economic augmentation and a culture or lifestyle revolution on the path towards development.
Keywords: communication, digital divide, society, Sri Lanka
Procedia PDF Downloads 232
25137 Business Intelligence for Profiling of Telecommunication Customer
Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro
Abstract:
Business Intelligence is a methodology that systematically exploits data to produce information and knowledge; it can support the decision-making process. Among the methods in business intelligence are data warehousing and data mining. A data warehouse can store historical data from transactional data. For data modelling in the data warehouse, we apply Kimball's dimensional modelling. Data mining is used to extract patterns from the data and gain insight. Data mining has many techniques, one of which is segmentation. For profiling telecommunication customers, we use customer segmentation according to the customer's usage of services, customer invoices, and customer payments. Customers can be grouped according to their characteristics, and profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation. As input variables for the algorithm, we use the RFM (Recency, Frequency, and Monetary) model. For all data mining processes, we use IBM SPSS Modeler.
Keywords: business intelligence, customer segmentation, data warehouse, data mining
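A minimal sketch of the RFM-plus-K-Means segmentation described above (the abstract uses IBM SPSS Modeler; scikit-learn stands in here, and the column names and records are invented):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Toy invoice/payment records (column names assumed)
invoices = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "paid_at": pd.to_datetime(["2024-01-05", "2024-03-01", "2023-11-20",
                               "2024-02-10", "2024-02-25", "2024-03-05"]),
    "amount": [40.0, 55.0, 20.0, 90.0, 75.0, 60.0],
})

# Build the RFM table: days since last payment, payment count, total spend
now = invoices["paid_at"].max()
rfm = invoices.groupby("customer_id").agg(
    recency=("paid_at", lambda s: (now - s.max()).days),
    frequency=("paid_at", "count"),
    monetary=("amount", "sum"),
)
rfm["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(rfm))
print(rfm)
```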
Procedia PDF Downloads 484
25136 Augmentation of Automatic Selective Door Operation Systems with UWB Positioning
Authors: John Chan, Jake Linnenbank, Gavin Caird
Abstract:
Automatic Selective Door Operation (ASDO) systems are increasingly used in railways to provide Correct Side Door Enable (CSDE) protection as well as to prevent passenger doors from opening off the platform where the train is longer than the platform, or in overshoot or undershoot scenarios. Such ASDO systems typically utilise trackside-installed RFID beacons, such as Eurobalises, for odometry positioning purposes. Installing such trackside infrastructure may not be desirable or possible due to various factors such as conflict with existing infrastructure, potential damage from track tamping, and jurisdiction constraints. Ultra-wideband (UWB) positioning technology could enable ASDO positioning requirements to be met without installing equipment directly on the track, since UWB technology can be installed on adjacent infrastructure such as platforms. This paper will explore the feasibility of upgrading existing ASDO systems with UWB positioning technology, the feasibility of retrofitting UWB-enabled ASDO systems onto unfitted trains, and other considerations relating to the use of UWB positioning for ASDO applications.
Keywords: UWB, ASDO, automatic selective door operations, CSDE, correct side door enable
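As background on how a UWB fix could be derived, the sketch below solves a least-squares position from ranges to platform-mounted anchors; the anchor layout and measurements are invented, and the paper itself does not specify a positioning algorithm:

```python
# Least-squares 2D position fix from UWB anchor ranges (illustrative only)
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [50.0, 0.0], [100.0, 0.0], [50.0, 4.0]])  # assumed layout (m)
ranges = np.array([40.1, 10.2, 60.0, 10.3])   # measured UWB distances (m), hypothetical

def residual(p):
    """Difference between predicted and measured anchor distances."""
    return np.linalg.norm(anchors - p, axis=1) - ranges

fix = least_squares(residual, x0=np.array([45.0, 1.0])).x
print(fix)   # estimated train position relative to the platform, roughly (40, 2)
```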
Procedia PDF Downloads 77
25135 Efficient Deep Neural Networks for Real-Time Strawberry Freshness Monitoring: A Transfer Learning Approach
Authors: Mst. Tuhin Akter, Sharun Akter Khushbu, S. M. Shaqib
Abstract:
A real-time system architecture is highly effective for monitoring and detecting damaged products or fruits that may deteriorate over time or become infected with diseases. Deep learning models have proven to be effective in building such architectures. However, building a deep learning model from scratch is a time-consuming and costly process. A more efficient solution is to utilize deep neural network (DNN) based transfer learning models in the real-time monitoring architecture. This study focuses on using a novel strawberry dataset to develop effective transfer learning models for the proposed real-time monitoring system architecture, specifically for evaluating and detecting strawberry freshness. Several state-of-the-art transfer learning models were employed, and the best-performing model was found to be Xception, demonstrating higher performance across evaluation metrics such as accuracy, recall, precision, and F1-score.
Keywords: strawberry freshness evaluation, deep neural network, transfer learning, image augmentation
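A minimal transfer-learning sketch along the lines described, using Keras' pretrained Xception; the input size, head layers, and two-class setup are assumptions rather than the paper's configuration:

```python
import tensorflow as tf

# Pretrained Xception backbone with ImageNet weights, classifier head removed
base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False   # freeze pretrained features for the first training phase

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(2, activation="softmax"),  # fresh vs. spoiled, assumed classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

A common follow-up is to unfreeze the top backbone layers and fine-tune at a lower learning rate once the new head has converged.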
Procedia PDF Downloads 90
25134 Prevalence, Median Time, and Associated Factors with the Likelihood of Initial Antidepressant Change: A Cross-Sectional Study
Authors: Nervana Elbakary, Sami Ouanes, Sadaf Riaz, Oraib Abdallah, Islam Mahran, Noriya Al-Khuzaei, Yassin Eltorki
Abstract:
Major Depressive Disorder (MDD) requires therapeutic intervention during the initial month after diagnosis for better disease outcomes. International guidelines recommend a duration of 4–12 weeks for an initial antidepressant (IAD) trial at an optimized dose to obtain a response. If depressive symptoms persist after this duration, guidelines recommend switching, augmenting, or combining strategies as the next step. Most patients with MDD in the mental health setting have been labeled incorrectly as treatment-resistant when in fact they have not been subjected to an adequate trial of guideline-recommended therapy. Premature discontinuation of the IAD due to ineffectiveness can have unfavorable consequences. Avoiding irrational practices such as subtherapeutic doses of the IAD, premature switching between antidepressants, and unjustified polypharmacy can help the disease go into remission. We aimed to determine the prevalence and patterns of strategies applied after an IAD was changed because of a suboptimal response, as the primary outcome. Secondary outcomes included the median survival time on the IAD before any change and the predictors associated with IAD change. This was a retrospective cross-sectional study conducted in the Mental Health Services in Qatar. A dataset covering January 1, 2018, to December 31, 2019, was extracted from the electronic health records. Inclusion and exclusion criteria were defined and applied. The sample size was calculated to be at least 379 patients. Descriptive statistics are reported as frequencies and percentages, in addition to means and standard deviations. The median time from IAD initiation to any change strategy was calculated using survival analysis. Associated predictors were examined using unadjusted and adjusted Cox regression models. A total of 487 patients met the inclusion criteria of the study. The average age of the participants was 39.1 ± 12.3 years. Patients experiencing a first MDD episode (255; 52%) constituted the major part of our sample compared to the relapse group (206; 42%). In total, 431 (88%) of the patients had their IAD changed to some strategy before the end of the study. Almost half of the sample (212; 49%; 95% CI [44–53%]) had their IAD changed within 30 days. Switching was consistently more common than combination or augmentation at any time point. The median time to IAD change was 43 days, with a 95% CI of [33.2–52.7]. Five independent variables (age, bothersome side effects, non-optimization of the dose before any change, comorbid anxiety, and first-onset episode) were significantly associated with the likelihood of IAD change in the unadjusted analysis. The factors statistically associated with a higher hazard of IAD change in the adjusted analysis were younger age, non-optimization of the IAD dose before any change, and comorbid anxiety. Because almost half of the patients in this study changed their IAD as early as within the first month, efforts to avoid treatment failure are needed to ensure patient treatment targets are met. The findings of this study provide direct clinical guidance for health care professionals, since optimized, evidence-based use of AD medication can improve the clinical outcomes of patients with MDD, and help identify high-risk factors that could worsen the survival time on the IAD, such as young age and comorbid anxiety.
Keywords: initial antidepressant, dose optimization, major depressive disorder, comorbid anxiety, combination, augmentation, switching, premature discontinuation
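The survival analysis described above can be sketched with the lifelines package; the column names and the synthetic cohort below are assumptions, not the study data:

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "days_on_iad": rng.exponential(60, n).round() + 1,  # time to change (or censoring)
    "changed": rng.integers(0, 2, n),                   # 1 = IAD changed, 0 = censored
    "age": rng.normal(39, 12, n),
    "anxiety": rng.integers(0, 2, n),                   # comorbid anxiety flag
})

# Kaplan-Meier estimate of median time on the initial antidepressant
kmf = KaplanMeierFitter().fit(df["days_on_iad"], event_observed=df["changed"])
print(kmf.median_survival_time_)   # the study reported a median of 43 days

# Adjusted Cox model: hazard of IAD change given age and comorbid anxiety
cph = CoxPHFitter().fit(df, duration_col="days_on_iad", event_col="changed")
cph.print_summary()
```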
Procedia PDF Downloads 151
25133 Imputation Technique for Feature Selection in Microarray Data Set
Authors: Younies Saeed Hassan Mahmoud, Mai Mabrouk, Elsayed Sallam
Abstract:
Analysing DNA microarray data sets is a great challenge for bioinformaticians due to the complications of applying statistical and machine learning techniques. The challenge is doubled when the microarray data sets contain missing values, which happens regularly, because these techniques cannot deal with missing data. One of the most important data analysis processes on a microarray data set is feature selection. This process finds the most important genes that affect a certain disease. In this paper, we introduce a technique for imputing the missing data in microarray data sets while performing feature selection.
Keywords: DNA microarray, feature selection, missing data, bioinformatics
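A minimal sketch of the impute-then-select pipeline the abstract describes, with KNN imputation standing in for the paper's (unspecified) technique and univariate F-tests for the feature selection; the expression matrix is synthetic:

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                 # stand-in expression matrix (samples x genes)
X[rng.random(X.shape) < 0.05] = np.nan         # 5% missing values, as often occurs
y = rng.integers(0, 2, size=60)                # disease labels (stand-in)

# Step 1: impute missing expression values from the 5 nearest samples
X_filled = KNNImputer(n_neighbors=5).fit_transform(X)

# Step 2: select the 20 genes most associated with the disease label
selector = SelectKBest(f_classif, k=20).fit(X_filled, y)
print(np.flatnonzero(selector.get_support()))  # indices of selected genes
```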
Procedia PDF Downloads 574
25132 PDDA: Priority-Based, Dynamic Data Aggregation Approach for Sensor-Based Big Data Framework
Authors: Lutful Karim, Mohammed S. Al-kahtani
Abstract:
Sensors are being used in various applications such as agriculture, health monitoring, air and water pollution monitoring, and traffic monitoring and control, and hence play a vital role in the growth of big data. However, sensors collect redundant data. Thus, aggregating and filtering sensor data are significantly important for designing an efficient big data framework. Current research does not focus on aggregating and filtering data at multiple layers of a sensor-based big data framework. Thus, this paper introduces (i) a three-layer data aggregation framework for big data and (ii) a priority-based, dynamic data aggregation scheme (PDDA) for the lowest layer, at the sensors. Simulation results show that PDDA outperforms existing tree- and cluster-based data aggregation schemes in terms of overall network energy consumption and end-to-end data transmission delay.
Keywords: big data, clustering, tree topology, data aggregation, sensor networks
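The PDDA scheme's internals are not given in the abstract, so the sketch below only illustrates the general idea of priority-based aggregation at the sensor layer: redundant readings are pooled, and the highest-priority types are forwarded first within a transmission budget. All names, priorities, and the budget are hypothetical:

```python
# Illustrative priority-based aggregation at the sensor layer (PDDA details assumed)
from collections import defaultdict

PRIORITY = {"heart_rate": 3, "air_quality": 2, "soil_moisture": 1}  # hypothetical

def aggregate(readings, budget=2):
    """Pool duplicate readings per sensor type, then forward by priority."""
    buckets = defaultdict(list)
    for sensor_type, value in readings:
        buckets[sensor_type].append(value)            # remove redundancy by pooling
    averaged = {t: sum(v) / len(v) for t, v in buckets.items()}
    ranked = sorted(averaged, key=lambda t: PRIORITY.get(t, 0), reverse=True)
    return {t: averaged[t] for t in ranked[:budget]}  # transmit within the budget

print(aggregate([("heart_rate", 71), ("heart_rate", 73),
                 ("soil_moisture", 0.22), ("air_quality", 41)]))
```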
Procedia PDF Downloads 346
25131 Design and Finite Element Analysis of Clamp Cylinder for Capacity Augmentation of Injection Moulding Machine
Authors: Vimal Jasoliya, Purnank Bhatt, Mit Shah
Abstract:
Injection moulding is one of the principal methods of converting plastics into various end products, using a very wide range of materials from commodity plastics to specialty engineering plastics. Injection moulding machines are rated by the tonnage of clamping force applied. The present work includes the design and finite element analysis of a structural component of an injection moulding machine, namely the clamp cylinder. The goal of the project is to upgrade the 1300T clamp cylinder to a 1500T clamp cylinder. The design of the existing 1300T clamp cylinder is checked first. Finite element analysis of the 1300T clamp cylinder is carried out in ANSYS Workbench, and the stress values are compared with the acceptance criteria and theoretical calculations. The relation between the clamp cylinder diameter and the tonnage capacity has been derived and verified for the 1300T clamp cylinder. The same correlation is then used to find the required thickness for the 1500T clamp cylinder, and the detailed design of the 1500T cylinder is carried out based on the calculated thickness.
Keywords: clamp cylinder, fatigue analysis, finite element analysis, injection moulding machines
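The diameter-tonnage relation mentioned above presumably follows from the hydraulic clamping force at a given oil pressure; on that assumption, scaling from 1300T to 1500T at constant pressure gives:

```latex
F = p \cdot \frac{\pi D^2}{4}
\quad\Rightarrow\quad
\frac{D_{1500}}{D_{1300}} = \sqrt{\frac{1500}{1300}} \approx 1.07
```

i.e., roughly a 7% larger bore, with the wall thickness then re-checked against the thick-walled cylinder stress criteria.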
Procedia PDF Downloads 335
25130 Optimizing Perennial Plants Image Classification by Fine-Tuning Deep Neural Networks
Authors: Khairani Binti Supyan, Fatimah Khalid, Mas Rina Mustaffa, Azreen Bin Azman, Amirul Azuani Romle
Abstract:
Perennial plant classification plays a significant role in various agricultural and environmental applications, assisting in plant identification, disease detection, and biodiversity monitoring. Nevertheless, attaining high accuracy in perennial plant image classification remains challenging due to the complex variations in plant appearance, the diverse range of environmental conditions under which images are captured, and the inherent variability in image quality stemming from factors such as lighting conditions, camera settings, and focus. This paper proposes an adaptation approach to optimize perennial plant image classification by fine-tuning pre-trained DNN models. It explores the efficacy of fine-tuning prevalent architectures, namely VGG16, ResNet50, and InceptionV3, leveraging transfer learning to tailor the models to the specific characteristics of perennial plant datasets. A subset of the MYLPHerbs dataset, consisting of 6 perennial plant species with 13,481 images captured under various environmental conditions, was used in the experiments. Different strategies for fine-tuning, including adjusting learning rates, training set sizes, data augmentation, and architectural modifications, were investigated. The experimental outcomes underscore the effectiveness of fine-tuning deep neural networks for perennial plant image classification, with ResNet50 showcasing the highest accuracy of 99.78%. Despite ResNet50's superior performance, both VGG16 and InceptionV3 achieved commendable accuracies of 99.67% and 99.37%, respectively. The overall outcomes reaffirm the robustness of the fine-tuning approach across different deep neural network architectures, offering insights into strategies for optimizing model performance in the domain of perennial plant image classification.
Keywords: perennial plants, image classification, deep neural networks, fine-tuning, transfer learning, VGG16, ResNet50, InceptionV3
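A sketch of an on-the-fly augmentation pipeline of the kind referred to above, using Keras preprocessing layers; the specific transforms and their strengths are assumptions, not the paper's settings:

```python
import tensorflow as tf

# Random transforms applied during training so each epoch sees varied views
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomContrast(0.1),
])

images = tf.random.uniform((8, 224, 224, 3))   # stand-in batch of plant images
augmented = augment(images, training=True)     # new random views every call
print(augmented.shape)
```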
Procedia PDF Downloads 66
25129 The Effect of Aerobic Exercises on the Amount of Urea, Uric Acid and Creatine in Blood of Iranian Soccer Players
Authors: Abdolrasoul Daneshjoo
Abstract:
The purpose of this research was to study the effect of aerobic exercise at 75% of maximum heart rate on the levels of urea, uric acid, and creatine in the blood of Iranian U-23 national soccer players. Twenty-seven players were selected with the following demographic characteristics: age 21.4 ± 1.60 years; weight 68 ± 9.4 kg; height 174.2 ± 8.6 cm. Blood urea, uric acid, and creatine are the dependent variables, whereas 40 minutes of running on a track at 75% of maximum heart rate is the independent variable. Resting heart rate and blood pressure, age, height, and weight are the controlled variables. Maximum heart rate was recorded under maximal exercise (8 minutes at 150–250 W) on an ergometer. Then, to set the independent variable, 75% of the maximum heart rate was determined for each player. Blood was drawn twice (before and after the exercise intervention). Moreover, the players were given instructions to follow during the 24 hours before the main exercise. The laboratory analysis methods were diacetyl monoxime for blood urea, the Caraway test for uric acid, and picric acid for creatine. A paired t-test was applied for the dependent groups (d.f. = 7), at significance levels of 0.01 for urea and uric acid and 0.05 for creatine. 1. Aerobic exercise can affect the concentrations of urea, uric acid, and creatine in blood serum and increase them. 2. Serum urea increased from 26.75 ± 2.59 to 28.9 ± 2.67 (8%) with 40 minutes of running at 75% of maximum heart rate. 3. Aerobic exercise caused uric acid to increase by 7%, from 5.7 ± 0.52 (before exercise) to 6.1 ± 0.71 (after exercise). Serum creatine increased from 1.36 ± 0.27 (before exercise) to 1.85 ± 0.49 (after exercise). We conclude that during aerobic exercise, catabolism of protein substrates increases. Moreover, the build-up of urea, uric acid, and creatine in blood serum as metabolic waste products can cause kidney disorders; tendons and joints are also affected by these by-products. An appropriate diet and exercise regimen can prevent the production of these by-products resulting from heavy exercise.
Keywords: aerobic exercise, urea, uric acid, creatine, blood, soccer national players
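The paired t-test mentioned above can be reproduced in a few lines with SciPy; the pre/post values below are hypothetical (d.f. = 7 implies eight paired observations):

```python
# Paired t-test sketch for pre/post serum urea (values hypothetical)
import numpy as np
from scipy import stats

before = np.array([26.1, 27.3, 25.8, 28.0, 26.5, 27.1, 26.0, 27.2])  # 8 subjects -> d.f. = 7
after = np.array([28.4, 29.0, 27.9, 30.1, 28.8, 29.2, 28.1, 29.6])

t_stat, p_value = stats.ttest_rel(after, before)
print(t_stat, p_value)   # significant at the stated alpha if p_value < 0.01
```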
Procedia PDF Downloads 534
25128 Dynamic Contrast-Enhanced Breast MRI Examinations: Clinical Use and Technical Challenges
Authors: Janet Wing-Chong Wai, Alex Chiu-Wing Lee, Hailey Hoi-Ching Tsang, Jeffrey Chiu, Kwok-Wing Tang
Abstract:
Background: Mammography has limited sensitivity and specificity, though it is the primary imaging technique for the detection of early breast cancer. Ultrasound imaging and contrast-enhanced MRI are useful adjunct tools to mammography. The advantage of breast MRI is its high sensitivity for invasive breast cancer. Therefore, indications for and use of breast magnetic resonance imaging have increased over the past decade. Objectives: 1. To demonstrate cases of the different indications for breast MR imaging. 2. To review the common artifacts and pitfalls in breast MR imaging. Materials and Methods: This is a retrospective study including all patients who underwent dynamic contrast-enhanced breast MRI examinations in our centre from Jan 2011 to Dec 2017. The clinical data and radiological images were retrieved from the EPR (electronic patient record), RIS (Radiology Information System), and PACS (Picture Archiving and Communication System). Results and Discussion: Cases include (1) screening of the contralateral breast in a patient with a new breast malignancy; (2) breast augmentation with free injection of unknown foreign materials; (3) finding of axillary adenopathy with an unknown site of primary malignancy; (4) neo-adjuvant chemotherapy: before, during, and after chemotherapy to evaluate treatment response and the extent of residual disease prior to operation. Relevant images will be included and illustrated in the presentation. As with other types of MR imaging, there are different artifacts and pitfalls that can potentially limit interpretation of the images. Because of the coils and software specific to breast MR imaging, there are some other technical considerations that are unique to MR imaging of the breast. Case demonstration images will be available in the presentation. Conclusion: Breast MR imaging is a highly sensitive and reasonably specific method for the detection of breast cancer. Adherence to appropriate clinical indications and technical optimization are crucial for achieving satisfactory images for interpretation.
Keywords: MRI, breast, clinical, cancer
Procedia PDF Downloads 243
25127 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography
Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai
Abstract:
Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the commonest site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation of osseous metastatic disease and provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model which can segment CT scan images of prostate adenocarcinoma vertebral bone metastatic lesions. nnUNet is an open-source Python package that adds optimizations to the deep learning-based UNet architecture but has not been extensively combined with transfer learning techniques due to the absence of readily available functionality for this method. The IRB-approved study data set includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years in radiology and oncologic imaging), to serve as ground truth for the automated segmentation. Despite nnUNet's success on some medical segmentation tasks, it only produced an average Dice Similarity Coefficient (DSC) of 0.31 on the USC dataset. DSC results fell into a bimodal distribution, with most scores either over 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1. Datasets have been identified for transfer learning, which involves balancing the size and similarity of the dataset. Identified datasets include the Pancreas data from the Medical Segmentation Decathlon, Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). Some of the challenges of producing an accurate model from the USC dataset include the small dataset size (115 images), 2D data (as nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including incorporating generative adversarial networks and diffusion models, in order to augment the dataset. Performance with different libraries, including MONAI and custom architectures in PyTorch, will be compared. In the future, molecular correlations will be tracked against radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of vertebral metastatic lesion detection.
Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics
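For reference, the Dice Similarity Coefficient used throughout the abstract can be computed directly from binary masks:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2*|A intersect B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

# Toy masks with partial overlap
pred = np.zeros((64, 64), dtype=bool); pred[10:30, 10:30] = True
truth = np.zeros((64, 64), dtype=bool); truth[15:35, 15:35] = True
print(round(dice(pred, truth), 3))   # between 0 (miss) and 1 (perfect overlap)
```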
Procedia PDF Downloads 96
25126 Smart Card Technology Adoption in a Hospital Setting
Authors: H. K. V. Narayan
Abstract:
This study was conducted at Tata Memorial Hospital (TMH), Mumbai, India. The study evaluated the impact of adopting Smart Cards (SC) for clinical and business transactions in order to reduce lead times and to enforce the business rules of the hospital. The objective of implementing the Smart Card was to improve patient perception of quality in terms of structure, process, and outcomes, and also to improve the productivity of the institution. The Smart Card was implemented in phases from 2011 and integrated with the Hospital Information System (HIS/EMR). The implementation was a learning curve for all the stakeholders, as the software obviated the need for hard copies of transactions. Acceptance by the stakeholders was a challenge in change management. The study assessed the impact 3 years into the implementation, and the observed trends suggest that it has decreased the lead times for services and increased the number of transactions, and thereby the productivity. Patients who used to complain of multiple queues and cumbersome transactions now compliment the administration on its effective use of information and communication technology.
Keywords: smart card, high availability of health care information, reduction in potential medical errors due to elimination of transcription errors, reduction in number of queues, increased transactions, augmentation of revenue
Procedia PDF Downloads 285
25125 Online Compressor Washing for Gas Turbine Power Output
Authors: Enyia James Diwa, Isaiah Thank-God Ebi, Dodeye Ina Igbong
Abstract:
The privatization of utilities has brought about very strong competition in industries such as petrochemicals and gas distribution, among others, especially considering the continuous increase in the cost of fuel. This gives gas turbine owners and operators a strong incentive to reduce and control engine performance degradation in order to minimize cost. The most common and crucial problem of the gas turbine is compressor fouling, which causes a reduction in flow capacity, compressor efficiency, and pressure ratio; this, in turn, leads to engine compressor re-matching and a reduction in output power and thermal efficiency. This paper presents in detail the major causes, effects, and control mechanisms of fouling. The major emphasis is on compressor water washing to enable power augmentation. A gas turbine similar to the GE LM6000 is modelled for the current study using TURBOMATCH, a Cranfield University software package specifically made for gas turbine performance simulation and fouling detection. The compounded and intricate challenges of online compressor water washing for large-output gas turbines are examined. The treatment is applied to axial compressors used in the petrochemical and hydrocarbon industry.
Keywords: gas turbine, fouling, degradation, compressor washing
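As a rough, hedged illustration of how fouling erodes output, the sketch below imposes a flow-capacity and efficiency penalty on a simple Brayton-cycle estimate; the cycle parameters and penalty magnitudes are invented and are not TURBOMATCH results:

```python
# Illustrative Brayton-cycle estimate of fouling impact (all parameters assumed)
def net_power(m_dot, eta_c, T1=288.0, T3=1400.0, pr=14.0, eta_t=0.90,
              cp=1005.0, gamma=1.4):
    """Net shaft power (W) for mass flow m_dot (kg/s) and compressor efficiency eta_c."""
    tau = pr ** ((gamma - 1.0) / gamma)
    w_comp = cp * T1 * (tau - 1.0) / eta_c        # compressor work per kg
    w_turb = cp * T3 * (1.0 - 1.0 / tau) * eta_t  # turbine work per kg
    return m_dot * (w_turb - w_comp)

clean = net_power(m_dot=125.0, eta_c=0.87)
# Fouling modelled as -5% flow capacity and -2% compressor efficiency
fouled = net_power(m_dot=125.0 * 0.95, eta_c=0.87 * 0.98)
print(f"power loss: {100 * (1 - fouled / clean):.1f}%")
```

Online water washing aims to recover most of this loss without shutting the unit down, which is why the paper's economics centre on it.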
Procedia PDF Downloads 348