86 Transient Phenomena in a 100 W Hall Thruster: Experimental Measurements of Discharge Current and Plasma Parameter Evolution
Authors: Clémence Royer, Stéphane Mazouffre
Abstract:
Nowadays, electric propulsion (EP) systems play a crucial role in space exploration missions due to their high specific impulse and long operational life. Hall thrusters (HTs) are among the most mature EP technologies. The HT is a gridless ion thruster that has proved reliable and high-performing for decades in various space missions. Operation of an HT relies on electron emission from a cathode placed outside a hollow dielectric channel that includes an anode at the back. The negatively charged particles are trapped in a magnetic field and efficiently slowed down. Through collisions, the electron cloud ionizes xenon atoms. A large electric field is generated in the axial direction due to the low electron transverse mobility in the region of strong magnetic field. Positive particles are pulled out of the chamber at high velocity and are neutralized directly at the exhaust area. This process accelerates the spacecraft at a high specific impulse. While the HT's architecture and operating principle are relatively simple, the physics behind the thrust is complex and still partly unknown. Current and voltage oscillations, as well as electron properties, have been captured over a 30 min period after ignition. The observed low-frequency oscillations exhibited specific frequency ranges, amplitudes, and stability patterns. Correlations between the oscillations and plasma characteristics were analyzed. The impact of these instabilities on thruster performance, including thrust efficiency, has been evaluated as well. Moreover, strategies for mitigating and controlling these instabilities, such as filtering, have been developed. In this contribution, in addition to presenting a summary of the results obtained in the transient regime, we will present and discuss recent advances in Hall thruster plasma discharge filtering and control.
Keywords: electric propulsion, Hall thruster, plasma diagnostics, low-frequency oscillations
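For orientation, the scale of the quantities involved can be checked with a back-of-the-envelope estimate: the ion exhaust velocity follows from the energy balance qV = ½mv². The sketch below uses illustrative values only (a singly charged xenon ion and an assumed 300 V discharge voltage), not measurements from this study.

```python
import math

# Back-of-the-envelope estimate; the discharge voltage is an assumed value.
q = 1.602e-19     # elementary charge (C)
m_xe = 2.18e-25   # mass of a xenon ion (kg)
V_d = 300.0       # assumed discharge (acceleration) voltage (V)
g0 = 9.81         # standard gravity (m/s^2)

v_exhaust = math.sqrt(2 * q * V_d / m_xe)  # from q*V = (1/2)*m*v^2
isp = v_exhaust / g0                       # specific impulse (s)
print(f"exhaust velocity ~ {v_exhaust / 1000:.1f} km/s, Isp ~ {isp:.0f} s")
# prints roughly 21 km/s, i.e., an Isp of about 2100 s
```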
Procedia PDF Downloads 91
85 Aerial Survey and 3D Scanning Technology Applied to the Survey of Cultural Heritage of Su-Paiwan, an Aboriginal Settlement, Taiwan
Authors: April Hueimin Lu, Liangj-Ju Yao, Jun-Tin Lin, Susan Siru Liu
Abstract:
This paper discusses the application of aerial survey technology and 3D laser scanning technology in the surveying and mapping of the settlements and slate houses of the old Taiwanese aborigines. The relics of the old Taiwanese aborigines, with thousands of years of history, are widely distributed in the deep mountains of Taiwan, over a vast area with inconvenient transportation. When constructing basic data on cultural assets, it is necessary to apply new technology to carry out efficient and accurate settlement mapping. In this paper, taking the old Paiwan settlement as an example, an aerial survey of a settlement of about 5 hectares and a 3D laser scan of a slate house were carried out. The obtained orthophoto image was used as an important basis for drawing the settlement map. The 3D landscape data on topography and buildings derived from the aerial survey are important for subsequent preservation planning, while the 3D building scan provides a more detailed record of architectural forms and materials. The 3D settlement data from the aerial survey can be further applied to a 3D virtual model and animation of the settlement for virtual presentation. The information from the 3D scan of the slate house can also be used for digital archives and data queries through network resources. The results of this study show that, in large-scale settlement surveys, aerial survey technology serves to capture the topography of settlements together with buildings and landscape spatial information, while 3D scanning serves for small-scale records of individual buildings. This application of 3D technology greatly increases the efficiency and accuracy of the survey and mapping of aboriginal settlements and is of great help for further preservation planning and rejuvenation of aboriginal cultural heritage.
Keywords: aerial survey, 3D scanning, aboriginal settlement, settlement architecture cluster, ecological landscape area, old Paiwan settlements, slate house, photogrammetry, SfM, MVS, point cloud, SIFT, DSM, 3D model
Procedia PDF Downloads 173
84 Portable System for the Acquisition and Processing of Electrocardiographic Signals to Obtain Different Metrics of Heart Rate Variability
Authors: Daniel F. Bohorquez, Luis M. Agudelo, Henry H. León
Abstract:
Heart rate variability (HRV) is defined as the temporal variation between heartbeats or RR intervals (the distance between R waves in an electrocardiographic signal). This distance is currently a recognized biomarker. By analysing it, the sympathetic and parasympathetic nervous systems, which are responsible for the regulation of the cardiac muscle, can be assessed. The analysis allows health specialists and researchers to diagnose various pathologies based on this variation. For the acquisition and analysis of HRV taken from a cardiac electrical signal, electronic equipment and analysis software that work independently are currently used. This complicates and delays the process of interpretation and diagnosis; the delay can put patients' health at greater risk and lead to untimely treatment. This document presents a single portable device capable of acquiring electrocardiographic signals and calculating a total of 19 HRV metrics, reducing the time required and resulting in timelier intervention. The device has an electrocardiographic signal acquisition card attached to a microcontroller capable of transmitting the cardiac signal wirelessly to a mobile device. In addition, a mobile application was designed to analyze the cardiac waveform. The device calculates the RR intervals and the different metrics. The application allows a user to visualize the cardiac signal and the 19 metrics in real time. The information is exported to a cloud database for remote analysis. The study was performed under controlled conditions in the simulated hospital of the Universidad de la Sabana, Colombia. A total of 60 signals were acquired and analyzed. The device was compared against two reference systems. The results show a strong level of correlation (r > 0.95, p < 0.05) between the 19 metrics compared. Therefore, the use of the portable system, evaluated in clinical scenarios controlled by medical specialists and researchers, is recommended for the evaluation of the condition of the cardiac system.
Keywords: biological signal analysis, heart rate variability (HRV), HRV metrics, mobile app, portable device
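As a sketch of how such metrics derive from the RR series, the snippet below computes three common time-domain HRV measures (mean RR, SDNN, RMSSD) from a list of RR intervals. The sample values are hypothetical; the paper's full set of 19 metrics also covers further time-domain, frequency-domain and non-linear measures.

```python
import numpy as np

# hypothetical RR intervals in milliseconds (distance between successive R waves)
rr = np.array([812, 790, 805, 830, 844, 815, 798, 801, 822, 810], dtype=float)

mean_rr = rr.mean()                          # average RR interval
sdnn = rr.std(ddof=1)                        # SDNN: std. dev. of all RR intervals
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # RMSSD: root mean square of successive differences
mean_hr = 60000.0 / mean_rr                  # mean heart rate in beats per minute

print(f"mean RR {mean_rr:.1f} ms, SDNN {sdnn:.1f} ms, "
      f"RMSSD {rmssd:.1f} ms, HR {mean_hr:.1f} bpm")
```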
Procedia PDF Downloads 185
83 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems
Authors: Samuel Kaspi, Sitalakshmi Venkatraman
Abstract:
In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and the associated security concerns, many organisations consider the trade-offs and continue to require fast and reliable transaction processing of disk-based database systems as an available choice. For these organisations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, helping improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. The promising results of this work show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.
Keywords: in-memory database, disk-based system, hybrid database, concurrency control
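The paper's EMA technique itself is not reproduced here, but the general idea of exploiting access invariance can be illustrated with a toy two-phase simulation. The sketch below assumes (as access invariance does) that a rerun of a transaction touches the same pages as its first run: a lock-free prefetch pass pulls each transaction's read set into memory concurrently, so the subsequent execution pass never stalls on disk while holding locks. All structures and latencies are invented for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

DISK_LATENCY = 0.005  # assumed per-page disk read cost (s)
cache = {}            # toy in-memory buffer pool

def read_page(page_id):
    if page_id not in cache:          # cache miss -> simulated disk IO
        time.sleep(DISK_LATENCY)
        cache[page_id] = f"data-{page_id}"
    return cache[page_id]

def prefetch(read_set):
    # access invariance: a rerun touches the same pages, so a read-only
    # "dry run" can pull every page into memory with no locks held
    for p in read_set:
        read_page(p)

transactions = [[i, i + 1, i + 2] for i in range(0, 30, 3)]  # read sets

# phase 1: highly concurrent, lock-free prefetch of all read sets
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(prefetch, transactions))

# phase 2: transactions execute against memory-resident pages, so locks
# (omitted in this toy) would cover only in-memory work, not disk waits
start = time.perf_counter()
for rs in transactions:
    for p in rs:
        read_page(p)
print(f"execution phase: {time.perf_counter() - start:.4f} s (no IO stalls)")
```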
Procedia PDF Downloads 420
82 Realizing Teleportation Using Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control
Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin
Abstract:
Preliminary tests have been designed and performed on a space-time control circuit using a two-level system circuit with a 4-5 cm diameter microstrip for realistic teleportation. The work begins by calculating the parameters that allow a circuit to use alternating current (AC) at a specified frequency as the input signal. A method that causes electrons to move along the circuit perimeter starting at the speed of light was found satisfactory on the basis of wave-particle duality. It is able to establish a superluminal (faster-than-light) speed for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. In fact, both black holes and white holes are created from time signals at the beginning, where the speed of the electrons travels close to the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. Therefore, we can apply this method to large-scale circuits, such as potassium, from which the same method can be applied to form a system to teleport living things. In fact, the black hole is a hibernation-system environment that allows living things to live and travel to the teleportation destination, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, it increases the frequency of the black holes and white holes cancelling each other out to a balanced environment. Therefore, life can safely teleport to the destination. Consequently, there must be the same system at the origin and the destination, which could form a network. Moreover, it can also be applied to space travel as well. The designed system will be tested on a small scale using a microstrip circuit system that we can create in the laboratory on a limited budget and that can be used in both wired and wireless systems.
Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics
Procedia PDF Downloads 75
81 A Strategy for Oil Production Placement Zones Based on Maximum Closeness
Authors: Waldir Roque, Gustavo Oliveira, Moises Santos, Tatiana Simoes
Abstract:
Increasing the oil recovery factor of an oil reservoir has been a concern of the oil industry. Usually, the production placement zones are defined after some analysis of geological and petrophysical parameters, with rock porosity, permeability and oil saturation being of fundamental importance. In this context, the determination of hydraulic flow units (HFUs) represents an important step in the process of reservoir characterization, since it may provide specific regions in the reservoir with similar petrophysical and fluid flow properties and, in particular, techniques supporting the placement of production zones that favour the tracing of directional wells. An HFU is defined as a representative volume of the total reservoir rock in which petrophysical and fluid flow properties are internally consistent and predictably distinct from those of other reservoir rocks. Technically, an HFU is characterized as a rock region whose flow zone indicator (FZI) points lie on a straight line of unit slope. The goal of this paper is to provide a trustworthy indication of oil production placement zones for the best-fit HFUs. The FZI cloud of points can be obtained from the reservoir quality index (RQI), a function of effective porosity and permeability. Considering log and core data, the HFUs are identified, and using the discrete rock type (DRT) classification, a set of connected cell clusters can be found; by means of a graph centrality metric, the maximum closeness (MaxC) cell is obtained for each cluster. Considering the MaxC cells as production zones, an extensive analysis based on several oil recovery factor and cumulative oil production simulations was done for the SPE Model 2 and UNISIM-I-D synthetic fields, where the latter was built from public data available from the actual Namorado Field, Campos Basin, in Brazil. The results have shown that the MaxC cells are technically feasible and very reliable as high-performance production placement zones.
Keywords: hydraulic flow unit, maximum closeness centrality, oil production simulation, production placement zone
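The FZI computation behind this classification follows the standard petrophysical definitions: RQI = 0.0314·sqrt(k/φ) (k in mD, φ as a fraction), normalized porosity φz = φ/(1−φ), and FZI = RQI/φz; one common discretization into rock types is DRT = round(2·ln(FZI) + 10.6). A minimal sketch with made-up core data is given below; the paper's own cluster and centrality steps are not reproduced.

```python
import numpy as np

# hypothetical core samples: permeability k (mD) and effective porosity phi (fraction)
k = np.array([120.0, 35.0, 450.0, 8.0, 60.0])
phi = np.array([0.21, 0.14, 0.25, 0.09, 0.18])

rqi = 0.0314 * np.sqrt(k / phi)   # reservoir quality index (microns)
phi_z = phi / (1.0 - phi)         # normalized (pore-to-matrix) porosity
fzi = rqi / phi_z                 # flow zone indicator

# discrete rock type: a common binning rule for grouping cells into HFUs
drt = np.round(2.0 * np.log(fzi) + 10.6).astype(int)

for i, (f, d) in enumerate(zip(fzi, drt)):
    print(f"sample {i}: FZI = {f:.2f} um, DRT = {d}")
```

Cells sharing a DRT value would then be grouped into connected clusters, and the maximum-closeness cell of each cluster taken as the candidate production zone.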
Procedia PDF Downloads 331
80 Personalizing Human Physical Life Routines Recognition over Cloud-Based Sensor Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Pervasive computing is a growing research field that aims to recognize human physical life routines (HPLR) based on body-worn sensors such as MEMS-based technologies. The use of these technologies for human activity recognition is progressively increasing. On the other hand, personalizing human life routines using numerous machine-learning techniques has always been an intriguing topic. Various methods have demonstrated the ability to recognize basic movement patterns; however, they still need to be improved to anticipate the dynamics of human living patterns. This study introduces state-of-the-art techniques for recognizing static and dynamic patterns and forecasting those challenging activities from multi-fused sensors. Numerous MEMS signals are extracted from one self-annotated dataset (IM-WSHA) and two benchmark datasets. First, the acquired raw data are filtered with z-normalization and denoising methods. Then, statistical, local binary pattern, autoregressive model and intrinsic time-scale decomposition features are adopted for feature extraction from different domains. Next, the acquired features are optimized using maximum relevance and minimum redundancy (mRMR). Finally, an artificial neural network is applied to analyze the whole system's performance. As a result, we attained a 90.27% recognition rate for the self-annotated dataset, while on HARTH and KU-HAR we achieved 83% on nine living activities and 90.94% on 18 static and dynamic routines, respectively. Thus, the proposed HPLR system outperformed other state-of-the-art systems when evaluated against other methods in the literature.
Keywords: artificial intelligence, machine learning, gait analysis, local binary pattern (LBP), statistical features, micro-electro-mechanical systems (MEMS), maximum relevance and minimum redundancy (mRMR)
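A minimal sketch of the mRMR selection step is shown below, using scikit-learn's mutual-information estimators: features are added greedily by maximizing relevance to the activity label minus mean redundancy to the already-selected set. The data are synthetic stand-ins, not the study's MEMS features, and the discretization used to estimate feature-feature mutual information is a simplifying assumption.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                  # synthetic feature matrix
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # synthetic activity labels

relevance = mutual_info_classif(X, y, random_state=0)  # I(feature; label)

def discretize(col, bins=8):
    return np.digitize(col, np.histogram_bin_edges(col, bins))

def mrmr(X, relevance, n_select=5):
    """Greedy mRMR: maximize relevance minus mean redundancy to the chosen set."""
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # redundancy: mean MI between candidate and already-selected features
            red = np.mean([mutual_info_score(discretize(X[:, j]),
                                             discretize(X[:, s]))
                           for s in selected])
            if relevance[j] - red > best_score:
                best, best_score = j, relevance[j] - red
        selected.append(best)
    return selected

print("selected feature indices:", mrmr(X, relevance))
```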
Procedia PDF Downloads 22
79 Debris Flow Mapping Using Geographical Information System Based Model and Geospatial Data in Middle Himalayas
Authors: Anand Malik
Abstract:
The Himalayas, with their high tectonic activity, pose a great threat to human life and property. Climate change is another driver, triggering extreme events with multiple-fold effects on the high-mountain glacial environment: rock falls, landslides, debris flows, flash floods and snow avalanches. One such extreme event, a cloudburst along with the breach of the moraine-dammed Chorabari Lake, occurred from June 14 to June 17, 2013, triggering flooding of the Saraswati and Mandakini rivers in the Kedarnath Valley of the Rudraprayag district of Uttarakhand state, India. As a result, a huge volume of fast-moving water created a catastrophe of the century, which resulted in the loss of a large number of human and animal lives and damage to pilgrimage, tourism, agriculture and property. A comprehensive assessment of debris flow hazards thus requires GIS-based modeling using numerical methods. The aim of the present study is the analysis and mapping of debris flow movements using geospatial data with Flow-R (developed by the team at IGAR, University of Lausanne). The model is based on combined probabilistic and energetic algorithms for the assessment of the spreading of the flow, with maximum runout distances. The ASTER digital elevation model (DEM) with 30 m x 30 m cell size (resolution) is used as the main geospatial data for preparing the runout assessment, while Landsat data are used to analyze land use and land cover change in the study area. The results show that the model can be applied with great accuracy, as it is very useful in determining debris flow areas. The results are compared with existing available landslide/debris flow maps. ArcGIS software is used to prepare runout susceptibility maps, which can be used in debris flow mitigation and future land use planning.
Keywords: debris flow, geospatial data, GIS based modeling, Flow-R
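Flow-R itself combines flow-direction spreading (e.g., Holmgren's algorithm) with energy-based runout limits. The simplified one-dimensional sketch below illustrates only the runout idea: propagation continues while the travel angle measured from the source stays above an assumed angle of reach. The profile, cell size and 11° threshold are all illustrative assumptions, not values from the study.

```python
import numpy as np

# hypothetical elevation profile downslope of a debris-flow source (m)
z = np.array([1000, 960, 930, 910, 898, 890, 886, 884, 883, 882.5,
              882.2, 882.0, 881.8, 881.6])
dx = 50.0           # assumed cell size (m)
reach_angle = 11.0  # assumed minimum travel angle for debris flows (deg)

dist = np.arange(1, len(z)) * dx        # horizontal distance from the source
drop = z[0] - z[1:]                     # total elevation drop from the source
travel_angle = np.degrees(np.arctan2(drop, dist))

reached = travel_angle >= reach_angle   # flow keeps spreading while above threshold
runout = dist[reached].max() if reached.any() else 0.0
print(f"estimated runout distance: {runout:.0f} m")
```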
Procedia PDF Downloads 274
78 Enhancing Healthcare Data Protection and Security
Authors: Joseph Udofia, Isaac Olufadewa
Abstract:
Every day, the size of Electronic Health Record data keeps increasing as new patients visit health practitioners and returning patients fulfil their appointments. As these data grow, so does their susceptibility to cyber-attacks from criminals waiting to exploit them. In the US, the damages from cyberattacks were estimated at $8 billion (2018), $11.5 billion (2019) and $20 billion (2021). These attacks usually involve the exposure of PII. Health data are considered PII, and their exposure carries significant impact. To this end, an enhancement of health policy and standards in relation to data security, especially among patients and their clinical providers, is critical to ensure ethical practices, confidentiality, and trust in the healthcare system. As clinical accelerators and applications that contain user data are used, it is expedient to review and revamp policies like the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA) and the Fast Healthcare Interoperability Resources (FHIR), all aimed at ensuring data protection and security in healthcare. FHIR caters to healthcare data interoperability, as data are shared across different systems, from customers to health insurers and care providers. The astronomical cost of implementation has deterred players in the space from ensuring compliance, leading to susceptibility to data exfiltration and loss of protected health information (PHI). Though HIPAA hones in on the security of PHI and PCI DSS on the security of payment card data, they intersect in the shared goal of protecting sensitive information in line with industry standards. With advancements in tech and the emergence of new technologies, it is necessary to revamp these policies to address their complexity and ambiguity, the cost barrier, and the ever-increasing threats in cyberspace. Healthcare data in the wrong hands is a recipe for disaster, and we must enhance its protection and security to protect the mental health of the current and future generations.
Keywords: cloud security, healthcare, cybersecurity, policy and standard
Procedia PDF Downloads 93
77 AI for Efficient Geothermal Exploration and Utilization
Authors: Velimir Monty Vesselinov, Trais Kliplhuis, Hope Jasperson
Abstract:
Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, geology, etc. Machine learning algorithms can identify subtle patterns and relationships within these data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a SIML (science-informed machine learning) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) based on simple algebraic mappings to represent complex processes. In contrast, the SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physical laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form.
Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal
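The SIML neurons described above are specific to the GeoTGO tool, but the general flavor of physics-constrained learning can be sketched as follows: a network is trained not only on data misfit but also on the residual of a governing equation. The example below assumes, purely for illustration, steady one-dimensional heat conduction (d²T/dx² = 0) between two sparse "well" observations; it is not the authors' implementation.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

# sparse synthetic "well" observations of temperature vs. normalized depth
x_data = torch.tensor([[0.0], [0.5], [1.0]])
T_data = torch.tensor([[10.0], [25.0], [40.0]])

# collocation points where the physics residual is enforced
x_phys = torch.linspace(0, 1, 50).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    data_loss = torch.mean((net(x_data) - T_data) ** 2)

    T = net(x_phys)
    dT = torch.autograd.grad(T.sum(), x_phys, create_graph=True)[0]
    d2T = torch.autograd.grad(dT.sum(), x_phys, create_graph=True)[0]
    phys_loss = torch.mean(d2T ** 2)   # residual of d2T/dx2 = 0

    loss = data_loss + phys_loss       # physics constrains the fit between wells
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")
```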
Procedia PDF Downloads 56
76 Tri/Tetra-Block Copolymeric Nanocarriers as a Potential Ocular Delivery System of Lornoxicam: Experimental Design-Based Preparation, in-vitro Characterization and in-vivo Estimation of Transcorneal Permeation
Authors: Alaa Hamed Salama, Rehab Nabil Shamma
Abstract:
Introduction: Polymeric micelles that can deliver drugs to intended sites of the eye have attracted much scientific attention recently. The aim of this study was to review the aqueous-based formulation of drug-loaded polymeric micelles that hold significant promise for ophthalmic drug delivery. This study investigated the synergistic performance of mixed polymeric micelles made of linear and branched poly(ethylene oxide)-poly(propylene oxide) for the more effective encapsulation of lornoxicam (LX) as a hydrophobic model drug. Methods: The co-micellization process of 10% binary systems combining different weight ratios of the highly hydrophilic poloxamers Synperonic® PE/P84 and Synperonic® PE/F127 and the hydrophobic poloxamine counterpart (Tetronic® T701) was investigated by means of photon correlation spectroscopy and cloud point. The drug-loaded micelles were tested for their solubilizing capacity towards LX. Results: The results showed a sharp solubility increase from 0.46 mg/ml up to more than 4.34 mg/ml, an approximately nine-fold increase. The optimized formulation was selected to achieve maximum drug-solubilizing power and clarity with the lowest possible particle size, and was characterized by ¹H NMR analysis, which revealed complete encapsulation of the drug within the micelles. Further investigations by histopathological and confocal laser studies revealed the non-irritant nature and good corneal penetrating power of the proposed nano-formulation. Conclusion: An LX-loaded polymeric nanomicellar formulation was fabricated, allowing easy application of the drug in the form of clear eye drops that do not cause blurred vision or discomfort, thus achieving high patient compliance.
Keywords: confocal laser scanning microscopy, histopathological studies, lornoxicam, micellar solubilization
Procedia PDF Downloads 449
75 Developing a Green Information Technology Model in Australian Higher-Educational Institutions
Authors: Mahnaz Jafari, Parisa Izadpanahi, Francesco Mancini, Muhammad Qureshi
Abstract:
The advancement of information technology (IT) has been an intrinsic element of the developments of the 21st century, bringing benefits such as increased economic productivity. However, its widespread application has also been associated with inadvertent negative impacts on society and the environment, necessitating selective interventions to mitigate these impacts. This study responded to this need by developing a Green IT Rating Tool (GIRT) for higher education institutions (HEIs) in Australia to evaluate the sustainability of IT-related practices from an environmental, social, and economic perspective. Each dimension must be considered equally to achieve sustainability. The development of the GIRT was informed by the views of interviewed IT professionals, whose opinions formed the basis of a framework listing Green IT initiatives in order of their perceived importance. This framework formed the basis of the GIRT, which identifies Green IT initiatives (such as videoconferencing as a substitute for long-distance travel) and the associated weighting of each practice. The proposed sustainable Green IT model could be integrated into existing IT systems, leading to significant reductions in carbon emissions and e-waste and improvements in energy efficiency. The development of the GIRT and the findings of this study have the potential to inspire other organizations to adopt sustainable IT practices, positively impact the environment, and be used as a reference by IT professionals and decision-makers to evaluate IT-related sustainability practices. The GIRT could also serve as a benchmark for HEIs to compare their performance with other institutions and to track their progress over time. Additionally, the study's results suggest that virtual and cloud-based technologies could reduce e-waste and energy consumption in the higher education sector. Overall, this study highlights the importance of incorporating Green IT practices into the IT systems of HEIs to contribute to a more sustainable future.
Keywords: green information technology, international higher-educational institution, sustainable solutions, environmentally friendly IT systems
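A weighted rating of this kind reduces to a simple computation; the sketch below shows one plausible form, where each initiative's institution score is weighted by its perceived importance and normalized to a percentage. The initiatives, weights and scores here are hypothetical examples, not the GIRT's actual items or weightings.

```python
# A minimal sketch of a weighted rating computation (all values hypothetical).
initiatives = {
    # initiative: (importance weight, institution score on a 0-5 scale)
    "videoconferencing instead of long-distance travel": (0.9, 4),
    "server virtualisation / cloud migration":           (0.8, 3),
    "e-waste recycling programme":                       (0.7, 2),
    "power management on lab computers":                 (0.5, 5),
}

weighted = sum(w * s for w, s in initiatives.values())
total_w = sum(w for w, _ in initiatives.values())
rating = weighted / (total_w * 5) * 100   # percentage of the maximum score

print(f"Green IT rating: {rating:.1f}%")
```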
Procedia PDF Downloads 76
74 A Readiness Framework for Digital Innovation in Education: The Context of Academics and Policymakers in Higher Institutions of Learning to Assess the Preparedness of Their Institutions to Adopt and Incorporate Digital Innovation
Authors: Lufungula Osembe
Abstract:
The field of education has witnessed advances in technology and digital transformation. The methods of teaching have undergone significant changes in recent years, affecting areas such as pedagogies, curriculum design, personalized teaching, gamification, data analytics, cloud-based learning applications, artificial intelligence tools, advanced plug-ins in LMSs, and the emergence of multimedia creation and design. Like other fields such as engineering, health, science, and technology, education has not been immune to the changes brought about by digital innovation in recent years. There is a need to look at the variables/elements that digital innovation brings to education and to develop a framework for higher institutions of learning to assess their readiness to create a viable environment for digital innovation to be successfully adopted. Given the potential benefits of digital innovation in education, it is essential to develop a framework that can assist academics and policymakers in higher institutions of learning to evaluate the effectiveness of adopting and adapting to the evolving landscape of digital innovation in education. The primary research question addressed in this study is to establish the preparedness of higher institutions of learning to adopt and adapt to this evolving landscape. The study follows a Design Science Research (DSR) paradigm to develop the readiness framework, proceeding through the DSR stages of problem awareness, suggestion, development, evaluation, and conclusion. One of the major contributions of this study will be the development of the framework for digital innovation in education. Given the various opportunities offered by digital innovation in recent years, a readiness framework for digital innovation will play a crucial role in guiding academics and policymakers in their quest to align with emerging technologies facilitated by digital innovation in education.
Keywords: digital innovation, DSR, education, opportunities, research
Procedia PDF Downloads 71
73 Design and Development of an Autonomous Beach Cleaning Vehicle
Authors: Mahdi Allaoua Seklab, Süleyman BaşTürk
Abstract:
In the quest to enhance coastal environmental health, this study introduces a fully autonomous beach cleaning machine, a breakthrough in leveraging green energy and advanced artificial intelligence for ecological preservation. Designed to operate independently, the machine is propelled by a solar-powered system, underscoring a commitment to sustainability and the use of renewable energy in autonomous robotics. The vehicle's autonomous navigation is achieved through a sophisticated integration of LIDAR and a camera system, utilizing an SSD MobileNet V2 object detection model for accurate and real-time trash identification. The SSD framework, renowned for its efficiency in detecting objects in various scenarios, is coupled with the lightweight and highly precise MobileNet V2 architecture, making it particularly suited to the computational constraints of on-board processing in mobile robotics. Training of the SSD MobileNet V2 model was conducted on Google Colab, harnessing cloud-based GPU resources to facilitate a rapid and cost-effective learning process. The model was refined with an extensive dataset of annotated beach debris, optimizing the parameters using the Adam optimizer and a cross-entropy loss function to achieve high-precision trash detection. This capability allows the machine to intelligently categorize and target waste, leading to more effective cleaning operations. This paper details the design and functionality of the beach cleaning machine, emphasizing its autonomous operational capabilities and the novel application of AI in environmental robotics. The results showcase the potential of such technology to fill existing gaps in beach maintenance, offering a scalable and eco-friendly solution to the growing problem of coastal pollution. The deployment of this machine represents a significant advancement in the field, setting a new standard for the integration of autonomous systems in the service of environmental stewardship.
Keywords: autonomous beach cleaning machine, renewable energy systems, coastal management, environmental robotics
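Downstream of the detector, target selection is a simple post-processing step; the sketch below illustrates one plausible version, where raw SSD outputs (boxes, scores, class ids) are confidence-thresholded and the machine steers toward the detection nearest its center line. The arrays, class labels, threshold and frame width are all assumed for illustration.

```python
import numpy as np

# hypothetical raw outputs of an SSD detector for one camera frame:
# bounding boxes (x1, y1, x2, y2 in pixels), confidence scores, class ids
boxes = np.array([[120, 300, 180, 360], [400, 310, 450, 355], [60, 50, 90, 80]])
scores = np.array([0.91, 0.58, 0.22])
classes = np.array([0, 1, 0])   # e.g., 0 = plastic, 1 = can (assumed labels)

CONF_THRESHOLD = 0.5            # assumed operating point
FRAME_CENTER_X = 320            # assumed 640-px-wide frame

keep = scores >= CONF_THRESHOLD           # discard low-confidence detections
candidates = boxes[keep]

# steer toward the kept detection closest to the camera's center line
centers_x = (candidates[:, 0] + candidates[:, 2]) / 2.0
nearest = int(np.argmin(np.abs(centers_x - FRAME_CENTER_X)))
offset_px = centers_x[nearest] - FRAME_CENTER_X   # + means target is to the right
print(f"{int(keep.sum())} detections kept; steering offset {offset_px:+.0f} px")
```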
Procedia PDF Downloads 29
72 Discerning Divergent Nodes in Social Networks
Authors: Mehran Asadi, Afrand Agah
Abstract:
In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we discuss overfitting and describe different approaches for evaluation and performance comparison of different classification methods. In classification, the main objective is to categorize different items and assign them to different groups based on their properties and similarities. In data mining, recursive partitioning is utilized to probe the structure of a data set, which allows us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data. Of course, we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees (with k-fold cross-validation to prune the tree). A decision tree is a non-parametric classification method that uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
Keywords: online social networks, data mining, social cloud computing, interaction and collaboration
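The study's analysis was carried out in R; an equivalent sketch in Python with scikit-learn is shown below, where k-fold cross-validation selects the cost-complexity pruning strength for a decision tree. The dataset is a synthetic stand-in for the node-feature table (density, transitivity, and so on), not the actual UCI data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for the node-feature table
X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=0)

# k-fold cross-validation over ccp_alpha prunes the tree: larger values
# collapse weak branches, keeping only the most important variables
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"ccp_alpha": np.linspace(0.0, 0.03, 16)},
    cv=5,                  # 5-fold cross-validation
    scoring="accuracy",
)
search.fit(X, y)

print("best ccp_alpha:", search.best_params_["ccp_alpha"])
print("cross-validated accuracy:", round(search.best_score_, 3))
```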
Procedia PDF Downloads 160
71 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method
Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang
Abstract:
Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location that it operates in. Each product has its sell-in and sell-out time series data, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution using machine learning models for forecasting. Similar products are combined such that there is one model for each product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is designed to be flexible enough to include any new product or eliminate any existing product in a product category based on requirements. We show how we can use the machine learning development environment on Amazon Web Services (AWS) to explore a set of forecasting models and create business intelligence dashboards that can be used with the existing demand planning tools in Nestlé. We explored recent deep neural networks (DNNs), which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and include domain-specific knowledge, we designed an ensemble approach using DeepAR and an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%. The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series
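A highly simplified sketch of the ensembling step is given below: a DeepAR-style probabilistic forecast (represented here by a placeholder array) is blended with an XGBoost regressor whose features include lagged sell-out, so that sell-out information influences the sell-in prediction. The data, blending weight and feature choices are all illustrative assumptions, not the production pipeline.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n = 104                                   # two years of weekly data (synthetic)
sell_out = 100 + 10 * np.sin(np.arange(n) / 8) + rng.normal(0, 3, n)
sell_in = np.roll(sell_out, 2) + rng.normal(0, 4, n)   # sell-in leads sell-out

# features: lagged sell-in plus recent sell-out (the sell-out/sell-in interlink)
X = np.column_stack([np.roll(sell_in, 1), np.roll(sell_out, 1),
                     np.roll(sell_out, 2)])[4:]        # drop wrap-around rows
y = sell_in[4:]

model = xgb.XGBRegressor(n_estimators=200, max_depth=3)
model.fit(X[:-12], y[:-12])               # hold out the last 12 weeks
xgb_pred = model.predict(X[-12:])

# placeholder standing in for DeepAR's hold-out forecast
deepar_pred = y[-12:] + rng.normal(0, 5, 12)

w = 0.5                                    # illustrative blending weight
ensemble = w * xgb_pred + (1 - w) * deepar_pred
mape = np.mean(np.abs(ensemble - y[-12:]) / y[-12:]) * 100
print(f"ensemble MAPE on hold-out: {mape:.1f}%")
```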
Procedia PDF Downloads 276
70 Redefining Health Information Systems with Machine Learning: Harnessing the Potential of AI-Powered Data Fusion Ecosystems
Authors: Shohoni Mahabub
Abstract:
Health Information Systems (HIS) are essential to contemporary healthcare; nonetheless, they frequently encounter challenges such as data fragmentation, inefficiencies, and an absence of real-time analytics. The advent of machine learning (ML) and artificial intelligence (AI) provides revolutionary potential to address these difficulties via AI-driven data fusion ecosystems. These ecosystems integrate many health data sources, including electronic health records (EHRs), wearable devices, and genetic data, with sophisticated machine learning techniques such as natural language processing (NLP) and predictive analytics to produce actionable insights. Through the integration of robust data-ingestion layers, secure interoperability protocols, and privacy-preserving models, these ecosystems provide individualized treatment, early illness diagnosis, and enhanced operational efficiency. This paradigm shift enhances clinical decision-making and rectifies systemic inefficiencies in healthcare delivery. Nonetheless, adoption presents problems such as data privacy concerns, ethical considerations, and scalability constraints. To address these issues, the study examines options such as federated learning for secure, decentralized data sharing, explainable AI for transparency, and cloud-based infrastructure for scalability. These ecosystems aim to address health equity disparities, particularly in resource-limited environments, and to improve public health surveillance, notably in pandemic response initiatives. This article emphasizes the revolutionary potential of AI-driven data fusion ecosystems in redefining Health Information Systems by providing an implementation roadmap and showcasing successful deployment case studies. The suggested method promotes a cooperative initiative among legislators, healthcare professionals, and technologists to establish a cohesive, efficient, and patient-centric healthcare model.
Keywords: AI-powered healthcare systems, data fusion ecosystem, predictive analytics, digital health interoperability
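The federated learning option mentioned above rests on a simple aggregation rule, federated averaging: each hospital trains locally and only model weights, never patient records, leave the site. A minimal sketch with synthetic weights is shown below; the layer shapes and record counts are assumed for illustration.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client model weights without sharing
    the underlying patient records, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# three hospitals with locally trained two-layer models (synthetic weights)
rng = np.random.default_rng(0)
hospitals = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
sizes = [1200, 450, 3100]   # local record counts (assumed)

global_model = fed_avg(hospitals, sizes)
print("aggregated layer shapes:", [w.shape for w in global_model])
```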
Procedia PDF Downloads 15
69 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications
Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso
Abstract:
The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO radar sensor, an optical camera and a dedicated set of software algorithms encompassing interferometry, tomography and photogrammetry. The MIMO radar sensor proposed in this work provides extremely high sensitivity to displacements, making the system able to react to tiny deformations (down to tens of microns) on a time scale that spans from milliseconds to hours. The MIMO feature makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped onto the azimuth-range directions with notable resolution in both dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performance processing unit allows the observed scene to be sensed with remarkable refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing for easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user with a high-frequency three-dimensional motion/vibration estimate for each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
Keywords: interferometry, MIMO RADAR, SAR, tomography
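The displacement sensitivity of repeat-pass interferometry follows from the two-way path: a line-of-sight displacement d produces a phase change Δφ = 4πd/λ, so d = λΔφ/(4π). The short sketch below, with an assumed radar wavelength (not the system's actual operating band), shows why micron-scale motion is resolvable from small phase changes.

```python
import numpy as np

wavelength = 0.0175        # assumed radar wavelength (m), ~17 GHz carrier
delta_phi = np.deg2rad(1)  # a small, measurable interferometric phase change

# repeat-pass interferometry: the two-way path gives delta_phi = 4*pi*d / lambda
d = wavelength * delta_phi / (4 * np.pi)
print(f"line-of-sight displacement: {d * 1e6:.1f} microns")
# a 1-degree phase change corresponds to roughly 24 microns here
```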
Procedia PDF Downloads 195
68 A Decadal Flood Assessment Using Time-Series Satellite Data in Cambodia
Authors: Nguyen-Thanh Son
Abstract:
Floods are among the most frequent and costliest natural hazards. Flood disasters especially affect poor people in rural areas, who are heavily dependent on agriculture and have lower incomes. Cambodia is identified as one of the most climate-vulnerable countries in the world, ranked 13th out of the 181 countries most affected by the impacts of climate change. Flood monitoring is thus a strategic priority at national and regional levels, because policymakers need reliable spatial and temporal information on flood-prone areas to form successful monitoring programs that reduce possible impacts on the country's economy and people's livelihoods. This study aims to develop methods for flood mapping and assessment from MODIS data in Cambodia. We processed the data for the period from 2000 to 2017, following three main steps: (1) data pre-processing to construct smooth time-series vegetation and water surface indices, (2) delineation of flood-prone areas, and (3) accuracy assessment. The results of the flood mapping were verified with ground reference data, indicating an overall accuracy of 88.7% and a Kappa coefficient of 0.77. These results were reaffirmed by the close agreement between the flood-mapped area and the ground reference data, with a coefficient of determination (R²) of 0.94. The seasonally flooded areas observed for 2010, 2015, and 2016 were remarkably smaller than in other years, mainly attributed to the El Niño weather phenomenon exacerbated by the impacts of climate change. Eventually, although several sources potentially lowered the mapping accuracy of flood-prone areas, including image cloud contamination, mixed-pixel issues, and the resolution bias between the mapping results and the ground reference data, our methods yielded satisfactory results for delineating the spatiotemporal evolution of floods. The results, in the form of quantitative information on spatiotemporal flood distributions, could be beneficial to policymakers in evaluating their management strategies for mitigating the negative effects of floods on agriculture and people's livelihoods in the country.
Keywords: MODIS, flood, mapping, Cambodia
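A sketch of index-based flood flagging is shown below: from MODIS surface reflectance, a vegetation index (EVI) and a water index (LSWI) are computed, and pixels with a weak vegetation signal but a relatively strong water signal are flagged as flooded. The reflectance values are synthetic and the threshold rule is a simplified illustration, not the study's exact decision criteria.

```python
import numpy as np

# synthetic MODIS surface reflectance bands (fractions): red, NIR, blue, SWIR
rng = np.random.default_rng(1)
red, nir = rng.uniform(0.02, 0.3, 100), rng.uniform(0.05, 0.5, 100)
blue, swir = rng.uniform(0.02, 0.2, 100), rng.uniform(0.02, 0.4, 100)

# standard vegetation and land-surface-water indices
evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
lswi = (nir - swir) / (nir + swir)

# simplified flood rule (threshold values are illustrative assumptions):
# very low vegetation signal, or a water signal overtaking the vegetation signal
flooded = (evi <= 0.1) | ((evi <= 0.3) & (lswi >= evi))
print(f"{int(flooded.sum())} of {flooded.size} pixels flagged as flood-prone")
```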
Procedia PDF Downloads 128
67 IT-Based Global Healthcare Delivery System: An Alternative Global Healthcare Delivery System
Authors: Arvind Aggarwal
Abstract:
We have developed a comprehensive global healthcare delivery system based on information technology. It has a medical consultation component, in which a virtual consultant can give medical consultations to patients and doctors at a digital medical centre after reviewing the patient's EMR file, consisting of the patient's history and investigations in voice, image and data formats. The system has a surgical operation component too, where a remote robotic consultant can conduct surgery at a robotic surgical centre. Instant speech and text translation is incorporated in the software, so the patient's speech and text (language) can be translated into the consultant's language and vice versa. A consultant of any specialty (surgeon or physician) based in any country can provide instant healthcare consultation to any patient in any country without loss of time. Robotic surgeons based in a tertiary care hospital in any country can perform remote robotic surgery through patient-friendly telemedicine and tele-surgical centres. The patient EMRs, financial data and data of all the consultants and robotic surgeons are stored in the cloud. It is a complete, comprehensive business model with a medical and surgical healthcare delivery system. The whole system is self-financing and can be implemented in any country. The entire system uses paperless, filmless techniques. This eliminates the use of consumables, thereby removing the substantial costs they incur. The consultants receive virtual patients in the form of EMRs; a consultant thus saves the time and expense of travelling to the hospital to see patients and gets an electronic file ready for reporting and diagnosis. Since the time otherwise spent on physical examination of the patient is saved, the consultant can spend quality time studying the EMR/virtual patient and give instant advice. As the time consumed per patient is reduced, the consultant can see more patients, and the cost of consultation per patient is therefore reduced. The additional productivity of the consultants can be channelled to serve rural patients devoid of doctors.
Keywords: e-health, telemedicine, telecare, IT-based healthcare
Procedia PDF Downloads 181
66 Digital Twin for University Campus: Workflow, Applications and Benefits
Authors: Frederico Fialho Teixeira, Islam Mashaly, Maryam Shafiei, Jurij Karlovsek
Abstract:
The ubiquity of data gathering and smart technologies, advancements in virtual technologies, and the development of the Internet of Things (IoT) have created urgent demands for frameworks and efficient workflows for data collection, visualisation, and analysis. Digital twins, at scales from the city down to the building, allow data from different sources to be brought together to generate fundamental and illuminating insights for the management of current facilities and the lifecycle of amenities, as well as for the improvement of the performance of current and future designs. Over the past two decades, there has been growing interest in digital twins and their applications at city and building scales. Most such studies look at the urban environment through a homogeneous or generalist lens and lack specificity in the particular characteristics or identities that define an urban university campus. Bridging this knowledge gap, this paper offers a framework for developing a digital twin for a university campus that, with some modifications, could provide insights for any large-scale digital twin setting, such as towns and cities. It showcases how currently unused data could be purposefully combined, interpolated and visualised to produce analysis-ready data (such as flood or energy simulations or functional and occupancy maps), highlighting the potential applications of such a framework for campus planning and policymaking. The research integrates campus-level data layers into one spatial information repository and casts light on critical data clusters for the digital twin at the campus level. The paper also seeks to raise insightful and directive questions on how a campus digital twin can be extrapolated to a city-scale digital twin. The outcomes of the paper thus inform future projects for the development of large-scale digital twins, as well as urban and architectural researchers, on potential applications of digital twins in future design, management, and sustainable planning, to predict problems, calculate risks, decrease management costs, and improve performance.
Keywords: digital twin, smart campus, framework, data collection, point cloud
Procedia PDF Downloads 70
65 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform
Authors: Khadija Refouh
Abstract:
Culture-bound expressions have been a bottleneck for natural language processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvement in translation quality, outperforming previous traditional translation systems in many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this development, there remain some serious challenges that face NMT when translating culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established language pairs such as English-French. Machine translations of opaque idioms from English into French are likely to be more accurate than translations from English into Arabic. For example, the Google Translate application translated the sentence "What a bad weather! It rains cats and dogs." into the target language Arabic as "يا له من طقس سيء! تمطر القطط والكلاب", an inaccurate literal translation. The translation of the same sentence into the target language French was "Quel mauvais temps! Il pleut des cordes.", where Google Translate used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to Google NMT using the Bilingual Evaluation Understudy score (BLEU). BLEU is an algorithm for evaluating the quality of text machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactic, lexical, and semantic features using Halliday's functional theory.
Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms
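BLEU itself is straightforward to compute; the sketch below uses NLTK to score a literal versus an idiomatic French candidate against a human reference, echoing the example above. The tokenizations and the choice of smoothing method are illustrative, not the paper's evaluation setup.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# reference: a human translation of the idiom; candidates: two MT outputs
reference = [["il", "pleut", "des", "cordes"]]
literal = ["il", "pleut", "des", "chats", "et", "des", "chiens"]
idiomatic = ["il", "pleut", "des", "cordes"]

smooth = SmoothingFunction().method1   # avoids zero scores on short sentences
print("literal BLEU:  ",
      round(sentence_bleu(reference, literal, smoothing_function=smooth), 3))
print("idiomatic BLEU:",
      round(sentence_bleu(reference, idiomatic, smoothing_function=smooth), 3))
# the idiomatic candidate matches the reference and scores far higher
```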
Procedia PDF Downloads 151
64 Empirical Analysis of the Effect of Cloud Movement in a Basic Off-Grid Photovoltaic System: Case Study Using Transient Response of DC-DC Converters
Authors: Asowata Osamede, Christo Pienaar, Johan Bekker
Abstract:
Mismatches in electrical energy (power) supply and outages from commercial providers do not, in general, promote development in the public and private sectors; they fundamentally limit the development of industries. A well-structured photovoltaic (PV) system is therefore important for efficient and cost-effective monitoring. The major renewable energy potential on Earth is provided by solar radiation, and solar photovoltaics (PV) are considered a promising technological solution to support the global transformation to a low-carbon economy and to reduce dependence on fossil fuels. Solar arrays, which consist of various PV modules, should be operated at the maximum power point in order to reduce the overall cost of the system, so power regulation and conditioning circuits should be incorporated in the set-up of a PV system. Power regulation circuits used in PV systems include maximum power point trackers, DC-DC converters and solar chargers. An inappropriate choice of power conditioning device in a basic off-grid PV system can cause power loss; hence, the right choice of power conditioning device to couple with the system is of the essence. This paper presents the design and implementation of power conditioning devices in order to improve the overall yield from the available solar energy and the system's total efficiency. The power conditioning devices taken into consideration in the project include buck and boost DC-DC converters as well as solar chargers with MPPT. A logging interface circuit (LIC) is designed and employed in the system. The LIC is designed on a printed circuit board and basically has DC current sensors, specifically the LTS 6-NP. The LIC is consequently required to log the voltages in the system (these include the PV voltage and the power conditioning device voltage). The voltage data are structured in such a way that they can be accommodated by the data logger. Preliminary results, which include the availability of power as well as system power loss and efficiency, will be presented and used to draw the final conclusions.
Keywords: tilt and orientation angles, solar chargers, PV panels, storage devices, direct solar radiation
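The ideal steady-state conversion ratios underlying these converters are V_out = D·V_in for the buck converter and V_out = V_in/(1−D) for the boost converter, where D is the duty cycle. The snippet below evaluates both for an assumed panel voltage (losses and transient behavior, the subject of the study, are ignored in this idealization).

```python
def buck_vout(vin, duty):
    """Ideal buck converter: steps the PV voltage down, Vout = D * Vin."""
    return duty * vin

def boost_vout(vin, duty):
    """Ideal boost converter: steps the PV voltage up, Vout = Vin / (1 - D)."""
    return vin / (1.0 - duty)

v_panel = 18.0   # assumed PV panel voltage under load (V)
for d in (0.3, 0.5, 0.7):
    print(f"D={d:.1f}: buck {buck_vout(v_panel, d):5.2f} V, "
          f"boost {boost_vout(v_panel, d):5.2f} V")
```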
Procedia PDF Downloads 136
63 IoT Based Soil Moisture Monitoring System for Indoor Plants
Authors: Gul Rahim Rahimi
Abstract:
The IoT-based soil moisture monitoring system for indoor plants is designed to address the challenges of maintaining optimal soil moisture levels for plant growth and health. The system utilizes sensor technology to collect real-time data on soil moisture levels, which are then processed and analyzed using machine learning algorithms. This allows for accurate and timely monitoring of soil moisture, ensuring plants receive the appropriate amount of water to thrive. The main objectives of the system are twofold: to keep plants fresh and healthy by preventing water deficiency and to provide users with comprehensive insights into the water content of the soil on a daily and hourly basis. By monitoring soil moisture levels, users can identify patterns and trends in water consumption, allowing for more informed decision-making regarding watering schedules and plant care. The scope of the system extends to the agriculture industry, where it can be utilized to minimize the effort required of farmers to monitor soil moisture levels manually. By automating the process of soil moisture monitoring, farmers can optimize water usage, improve crop yields, and reduce the risk of plant diseases associated with over- or under-watering. Key technologies employed in the system include the Capacitive Soil Moisture Sensor V1.2 for accurate soil moisture measurement, the NodeMCU ESP8266-12E board for data transmission and communication, and the Arduino framework for programming and development. Additionally, machine learning algorithms are utilized to analyze the collected data and provide actionable insights. Cloud storage is utilized to store and manage the data collected from multiple sensors, allowing for easy access and retrieval of information. Overall, the IoT-based soil moisture monitoring system offers a scalable and efficient solution for indoor plant care, with potential applications in agriculture and beyond. By harnessing the power of IoT and machine learning, the system empowers users to make informed decisions about plant watering, leading to healthier and more vibrant indoor environments.
Keywords: IoT-based, soil moisture monitoring, indoor plants, water management
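The sensor-reading step can be sketched as follows: capacitive probes report a raw ADC count that falls as moisture rises, so the reading is mapped to a percentage between two calibration points. The firmware here is Arduino/C++; the Python sketch below shows only the calibration and alert logic, and the ADC endpoints and alert threshold are assumed values that must be calibrated per sensor.

```python
# assumed calibration endpoints for a capacitive probe on a 10-bit ADC:
ADC_DRY = 850    # raw reading in dry air
ADC_WET = 420    # raw reading submerged in water
THRESHOLD = 35   # assumed moisture % below which the plant needs water

def moisture_percent(adc_value):
    """Map the raw ADC count (lower = wetter) to a 0-100% moisture value."""
    pct = (ADC_DRY - adc_value) / (ADC_DRY - ADC_WET) * 100.0
    return max(0.0, min(100.0, pct))

reading = 700                      # example raw reading from the sensor
pct = moisture_percent(reading)
print(f"soil moisture: {pct:.0f}%")
if pct < THRESHOLD:
    print("alert: water the plant")   # would be pushed to the cloud/app
```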
Procedia PDF Downloads 52
62 Prioritizing Biodiversity Conservation Areas based on the Vulnerability and the Irreplaceability Framework in Mexico
Authors: Alma Mendoza-Ponce, Rogelio Corona-Núñez, Florian Kraxner
Abstract:
Mexico is a megadiverse country, and it has nearly halved its natural vegetation in the last century due to agricultural and livestock expansion. The impacts of land use/cover change and climate change are unevenly distributed, and spatial prioritization to minimize their effects on biodiversity is crucial. Global and national efforts at prioritizing biodiversity conservation show that ~33% to 45% of Mexico should be protected; the breadth of these targets makes it difficult to direct resources. We use a framework based on vulnerability and irreplaceability to prioritize conservation efforts in Mexico. Vulnerability considered exposure, sensitivity and adaptive capacity under two scenarios (a business-as-usual (BAU) scenario, based on SSP2 and RCP 4.5, and a Green scenario, based on SSP1 and RCP 2.6). Exposure to land use is the magnitude of change from natural vegetation to anthropogenic covers, while exposure to climate change is the difference between current and future values in both scenarios. Sensitivity was taken as the number of endemic species of terrestrial vertebrates that are critically endangered or endangered. Adaptive capacity is used as the ratio between the percentage of converted area (natural to anthropogenic) and the percentage of protected area at the municipality level. The results suggest that by 2050, between 11.6% and 13.9% of Mexico will show vulnerability ≥ 50%, and by 2070, between 12.0% and 14.8%, in the Green and BAU scenarios, respectively. From an ecosystem perspective, cloud forests, followed by tropical dry forests, natural grasslands and temperate forests, will be the most vulnerable (≥ 50%). Amphibians are the most threatened vertebrates: 62% of endemic amphibians are critically endangered or endangered, compared with 39%, 12% and 9% of mammals, birds, and reptiles, respectively. However, the distribution of these amphibians accounts for only 3.3% of the country, while the mammals, birds, and reptiles in these categories cover 10%, 16% and 29% of Mexico, respectively. Five municipalities out of Mexico's 2,457 contain 31% of the most vulnerable areas (vulnerability ≥ 70%), yet these municipalities account for only 0.05% of the country. This multiscale approach can be used to direct resources to conservation targets such as ecosystems, municipalities or species, considering land use/cover change, climate change and biodiversity uniqueness.
Keywords: biodiversity, climate change, land use change, Mexico, vulnerability
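A minimal sketch of combining the three components into a single vulnerability score is shown below. The aggregation used here, a simple average of min-max-normalized components with adaptive capacity entering inversely, is an illustrative assumption, not the paper's exact formula, and the municipality inputs are synthetic.

```python
import numpy as np

def minmax(x):
    """Rescale a component to the 0-1 range."""
    return (x - x.min()) / (x.max() - x.min())

# synthetic municipality-level inputs
exposure = np.array([0.6, 0.1, 0.8, 0.4])    # land-use + climate-change exposure
sensitivity = np.array([12, 2, 25, 7])       # endemic CR/EN vertebrate counts
adaptive = np.array([0.9, 2.5, 0.3, 1.2])    # protected-to-converted ratio

# higher exposure and sensitivity raise vulnerability; adaptive capacity lowers it
vulnerability = (minmax(exposure) + minmax(sensitivity)
                 + (1 - minmax(adaptive))) / 3 * 100

for i, v in enumerate(vulnerability):
    print(f"municipality {i}: vulnerability {v:.0f}%")
```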
Procedia PDF Downloads 168
61 The Association between Saharan Dust and Emergency Department Admission and Hospitalization in Gaziantep, Turkey
Authors: Behcet Al, Mustafa Bogan, Mehmet Murat Oktay, Suat Zengin, Hasan Bayram
Abstract:
Objective: In the last two decades, there has been strong scientific interest in the role of aerosols in the Earth's climate and associated changes. Aerosol particles are very important to the Earth-atmosphere climate system, playing a crucial role in cloud and precipitation processes, air quality, and climate. Here, we evaluated the association between Saharan dust and emergency department (ED) admission, hospitalization, and mortality. Method: The records of admissions to the emergency department of Gaziantep University and the dust storms over 31 months were studied. Patients admitted to the ED with chronic obstructive lung disease (COLD), asthma bronchiale (AB), cerebrovascular events (SVE), acute myocardial infarction (AMI), and stable and unstable angina pectoris (SAAP and USAP) were included, covering days with and without dust storms. The study ran from March 2010 to October 2012. Admissions during the three days before a storm (group 1), during storm days (group 2), and during the three days after a storm (group 3) were determined. The mean PM10 level was calculated, and the results were compared. Results: 5,864 patients with the above diagnoses were admitted during the days with and without dust storms. 28 dust storms occurred during the 31 months, lasting 78 days in total. Of the admissions, 35.5% (n=2,075) were in group 1, 29.8% (n=1,746) in group 2, and 34.8% (n=2,043) in group 3. The mean PM10 levels for groups 1, 2, and 3 were 78.53 mg/m3 (range 19–276), 108.7 mg/m3 (range 34–631), and 60.9 mg/m3 (range 17–160), respectively. The mean number of admissions per day was 24.86, 22.55, and 24.50, respectively. Mortality was 12 in group 1, 12 in group 2, and 17 in group 3. The hospitalization ratios were 0.24, 0.27, and 0.27, respectively. Conclusion: Although the mean PM10 level in group 2 (dust storm days) was significantly higher (p=0.001) than on the days before (group 1) and after (group 3) the storms, the mean daily admissions, hospitalization, and mortality related to the studied diseases (COLD, AB, SVE, AMI, SAAP, and USAP) in group 2 were lower than in groups 1 and 3.
Keywords: Saharan dust, PM10 particulate, emergency department admission, mortality
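For readers who want to run this kind of three-group comparison on their own data, a sketch is given below. The study's raw records are not public, so the script generates synthetic daily values whose means echo the reported group means; the spreads, sample sizes, and the choice of one-way ANOVA are assumptions for illustration.

```python
# Synthetic re-enactment of the three-group PM10 comparison; all data are
# hypothetical stand-ins generated to match the reported group means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pm10 = {
    "before": rng.normal(78.5, 25.0, 78),   # group 1
    "during": rng.normal(108.7, 45.0, 78),  # group 2 (storm days)
    "after":  rng.normal(60.9, 20.0, 78),   # group 3
}

f_stat, p_val = stats.f_oneway(pm10["before"], pm10["during"], pm10["after"])
print(f"PM10 across groups: F={f_stat:.1f}, p={p_val:.4g}")
```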
Procedia PDF Downloads 396
60 Development of Internet of Things (IoT) with Mobile Voice Picking and Cargo Tracing Systems in Warehouse Operations of Third-Party Logistics
Authors: Eugene Y. C. Wong
Abstract:
The increased market competition, customer expectations, and warehouse operating costs in third-party logistics have motivated continuous efforts to improve operational efficiency in warehouse logistics. Cargo tracing in the order picking process consumes excessive time for warehouse operators handling the enormous quantities of goods flowing through the warehouse each day. An Internet of Things (IoT) solution with a mobile cargo tracing app and database management system is developed in this research to facilitate and reduce cargo tracing time in the order picking process of a third-party logistics firm. An operation review was carried out in the firm, and opportunities for improvement were identified, including inaccurate inventory records in the warehouse management system, excessive tracing time for stored products, and product misdelivery. The facility layout has been improved by modifying the designated locations of various types of products. The relationships among pick-and-pack processing time, cargo tracing time, delivery accuracy, inventory turnover, and inventory count operation time in the warehouse are evaluated, and the correlation of the factors affecting the overall cycle time is analysed. A mobile app is developed with MIT App Inventor and an Access database to facilitate cargo tracking anytime, anywhere. An information flow framework from the warehouse database system to cloud document-sharing, and onward to the mobile device, is developed. The improved cargo tracing performance and order processing cycle times of warehouse operators have been collected and evaluated. The developed mobile voice picking and tracking system brings significant benefits to the third-party logistics firm, including eliminating unnecessary cargo tracing time in the order picking process and reducing warehouse operators' overtime costs. As future development, the mobile tracking device is planned to further enhance picking time and cycle counting through the voice picking system in the mobile apps.
Keywords: warehouse, order picking process, cargo tracing, mobile app, third-party logistics
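A sketch of the correlation analysis mentioned above is shown below; the CSV file and column names are hypothetical placeholders for whatever export the firm's warehouse management system provides.

```python
# Illustrative correlation of warehouse factors against overall cycle time;
# file name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("warehouse_kpis.csv")  # assumed per-order KPI export
factors = ["pick_pack_time", "cargo_tracing_time", "delivery_accuracy",
           "inventory_turnover", "inventory_count_time"]
print(df[factors + ["overall_cycle_time"]].corr()["overall_cycle_time"])
```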
Procedia PDF Downloads 375
59 Geographic Information System Cloud for Sustainable Digital Water Management: A Case Study
Authors: Mohamed H. Khalil
Abstract:
Water is one of the most crucial elements influencing human lives and development. Notably, over the last few years, GIS has played a significant role in optimizing water management systems, especially after the exponential development of this sector. In this context, the Egyptian government initiated an advanced 'GIS-Web Based System'. This system is designed to tangibly assist and optimize the integration of data among the Call Center, Operation and Maintenance, and Laboratory departments. The core of the system is a unified 'Data Model' for all the spatial and tabular data of the corresponding departments. The system provides advanced functionalities such as interactive data collection, dynamic monitoring, multi-user editing, enhanced data retrieval, integrated workflow, different access levels, and correlated information records/tracking. This cost-effective system contributes significantly not only to the completeness of the base map (93%) and of the water network (87%) in a highly detailed GIS format, and to enhanced customer service performance, but also to reduced day-to-day operating costs (~5-10%). In addition, the system facilitates data exchange between the different departments (Call Center, Operation and Maintenance, and Laboratory), which allows better understanding and analysis of complex situations. Furthermore, the system has tangibly enabled: (i) dynamic environmental monitoring of water quality indicators (ammonia, turbidity, TDS, sulfate, iron, pH, etc.); (ii) improved effectiveness of the different water departments; (iii) efficient, deep, advanced analysis; (iv) advanced web-reporting tools (daily, weekly, monthly, quarterly, and annual); (v) tangible planning synthesizing spatial and tabular data; and finally, (vi) a scalable decision support system. It is worth highlighting that the proposed second phase of this system will extend its scalability to include integration with the Billing and SCADA departments, adding advanced functionalities alongside the existing ones to allow further sustainable contributions.
Keywords: GIS Web-Based, base-map, water network, decision support system
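As a sketch of the dynamic water-quality monitoring described in point (i), the snippet below flags samples that breach indicator limits. The indicator names come from the abstract; the limit values are hypothetical placeholders, not regulatory thresholds or values from the system.

```python
# Hypothetical indicator limits; replace with the utility's actual thresholds.
LIMITS = {"ammonia": 0.5, "turbidity": 1.0, "tds": 1000,
          "sulfate": 250, "iron": 0.3, "ph": (6.5, 8.5)}

def check_sample(sample):
    """Return alert strings for every indicator outside its limit."""
    alerts = []
    for key, limit in LIMITS.items():
        value = sample.get(key)
        if value is None:
            continue  # indicator not measured in this sample
        if isinstance(limit, tuple):  # acceptable range (e.g., pH)
            if not limit[0] <= value <= limit[1]:
                alerts.append(f"{key} out of range: {value}")
        elif value > limit:
            alerts.append(f"{key} above limit: {value}")
    return alerts

print(check_sample({"ammonia": 0.8, "ph": 7.1}))  # ['ammonia above limit: 0.8']
```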
Procedia PDF Downloads 98
58 The Competitiveness of Small and Medium Sized Enterprises: Digital Transformation of Business Models
Authors: Chante Van Tonder, Bart Bossink, Chris Schachtebeck, Cecile Nieuwenhuizen
Abstract:
Small and Medium-Sized Enterprises (SMEs) play a key role in national economies around the world as contributors to economic and social well-being. The success, growth, and competitiveness of SMEs are therefore critical, yet many factors undermine them, such as resource constraints, poor information and communication technology (ICT) infrastructure, skills shortages, and poor management. The Fourth Industrial Revolution offers the SME sector new tools and opportunities, such as digital transformation and business model innovation (BMI), to enhance its competitiveness. Adopting and leveraging digital technologies such as cloud, mobile technologies, big data, and analytics can significantly improve business efficiencies, value propositions, and customer experiences, and digital transformation can thus contribute to the growth and competitiveness of SMEs. However, SMEs are lagging behind in digital transformation, and extant research lacks conceptual and empirical work on how digital transformation drives BMI and the impact it has on the growth and competitiveness of SMEs. The purpose of this study is therefore to close this gap by developing and empirically validating a conceptual model to determine whether SMEs are achieving BMI through digital transformation and how this impacts growth, competitiveness, and overall business performance. To this end, an empirical study is being conducted on 300 SMEs, consisting of 150 South African and 150 Dutch SMEs. Structural equation modeling is used, since it is a multivariate statistical technique for analysing structural relationships and a suitable method for testing the hypotheses in the model. Empirical research is needed to gain more insight into whether and how SMEs are digitally transformed and how BMI can be driven through digital transformation. The findings of this study can be used by SME business owners, managers, and employees at all levels. They will indicate whether digital transformation can indeed impact the growth, competitiveness, and overall performance of an SME, reiterating the importance and potential benefits of adopting digital technologies, and will also show how BMI can be achieved in light of digital transformation. This study contributes to the body of knowledge on a highly relevant topic in management studies by analysing the impact of digital transformation on BMI across a large number of SMEs that differ distinctly in economic and cultural factors.
Keywords: business models, business model innovation, digital transformation, SMEs
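To make the analysis step concrete, a hedged sketch of estimating such a model with the semopy library follows. The latent constructs mirror the abstract, but the indicator names, the exact path specification, and the survey file are hypothetical placeholders, not the authors' instrument.

```python
# Hypothetical SEM specification: digital transformation -> BMI ->
# competitiveness; indicator names (dt1, bmi1, ...) are placeholders.
import pandas as pd
import semopy

DESC = """
DigitalTransformation =~ dt1 + dt2 + dt3
BMI =~ bmi1 + bmi2 + bmi3
Competitiveness =~ comp1 + comp2 + comp3
BMI ~ DigitalTransformation
Competitiveness ~ BMI + DigitalTransformation
"""

data = pd.read_csv("sme_survey.csv")  # assumed survey export (n = 300)
model = semopy.Model(DESC)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors, p-values
```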
Procedia PDF Downloads 240
57 Digital Immunity System for Healthcare Data Security
Authors: Nihar Bheda
Abstract:
Protecting digital assets such as networks, systems, and data from advanced cyber threats is the aim of Digital Immunity Systems (DIS), which are a subset of cybersecurity. With features like continuous monitoring, coordinated reactions, and long-term adaptation, DIS seeks to mimic biological immunity. This minimizes downtime by automatically identifying and eliminating threats. Traditional security measures, such as firewalls and antivirus software, are insufficient for enterprises, such as healthcare providers, given the rapid evolution of cyber threats. The number of medical record breaches that have occurred in recent years is proof that attackers are finding healthcare data to be an increasingly valuable target. However, obstacles to enhancing security include outdated systems, financial limitations, and a lack of knowledge. DIS is an advancement in cyber defenses designed specifically for healthcare settings. Protection akin to an "immune system" is produced by core capabilities such as anomaly detection, access controls, and policy enforcement. Coordination of responses across IT infrastructure to contain attacks is made possible by automation and orchestration. Massive amounts of data are analyzed by AI and machine learning to find new threats. After an incident, self-healing enables services to resume quickly. The implementation of DIS is consistent with the healthcare industry's urgent requirement for resilient data security in light of evolving risks and strict guidelines. With resilient systems, it can help organizations lower business risk, minimize the effects of breaches, and preserve patient care continuity. DIS will be essential for protecting a variety of environments, including cloud computing and the Internet of medical devices, as healthcare providers quickly adopt new technologies. DIS lowers traditional security overhead for IT departments and offers automated protection, even though it requires an initial investment. In the near future, DIS may prove to be essential for small clinics, blood banks, imaging centers, large hospitals, and other healthcare organizations. Cyber resilience can become attainable for the whole healthcare ecosystem with customized DIS implementations.
Keywords: digital immunity system, cybersecurity, healthcare data, emerging technology
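One core DIS capability named above, anomaly detection, can be sketched with an off-the-shelf model; the log-derived features and their values below are illustrative, not drawn from any real healthcare deployment.

```python
# Isolation-forest anomaly detection over (hypothetical) features extracted
# from healthcare access logs.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [records_accessed_per_hour, failed_logins, off_hours_fraction]
baseline = np.array([[12, 0, 0.05], [9, 1, 0.02], [15, 0, 0.10],
                     [11, 0, 0.04], [10, 2, 0.08]] * 20)
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

suspect = np.array([[480, 14, 0.95]])  # bulk export at night, many failures
if detector.predict(suspect)[0] == -1:  # -1 flags an outlier
    print("anomaly: isolate session and trigger coordinated response")
```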
Procedia PDF Downloads 69