Search results for: resolution digital data
26123 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data
Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali
Abstract:
The research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve spatial and temporal resolution in ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB), which include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into combinations of three proportions, i.e., train, test, and validation sets, while kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines the existing methods and machine learning techniques used to determine evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and the hydrological ecosystem.
Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors
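The following minimal sketch (not the authors' code) illustrates how an SVM regression model with a kernel function can be trained on a train/test split and scored with R2, RMSE, and MAE as described above; the synthetic data, feature set, and hyperparameter values are assumptions for demonstration only.

```python
# Minimal illustrative sketch (not from the paper): SVM regression for ET prediction.
# Feature names, synthetic data, and hyperparameters are assumptions for demonstration.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
n = 365  # one year of daily records
# Hypothetical soil-environmental features: Rs, T, P, RH, u2, R, DP, ST, dSM
X = rng.normal(size=(n, 9))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 3] + rng.normal(scale=0.1, size=n)  # synthetic ET target

# Split into train/test sets (a validation split could be carved out of the training set)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)  # kernel and hyperparameters would be tuned iteratively
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("R2  :", r2_score(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
print("MAE :", mean_absolute_error(y_test, pred))
```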
Procedia PDF Downloads 69
26122 E-Procurement Adoption and Effective Service Delivery in the Uganda Coffee Industry
Authors: Taus Muganda
Abstract:
This research explores the intricate relationship between e-procurement adoption and effective service delivery in the Uganda Coffee Industry, focusing on the processes involved, the key actors, and the impact of digital transformation. The study is guided by three prominent theories, Actor-Network Theory, Resource-Based View Theory, and Institutional Theory, to comprehensively explore the dynamics of e-procurement in the context of the coffee sector. The primary aim of this project is to examine the e-procurement adoption process and its role in enhancing service delivery within the Uganda Coffee Industry. The research questions guiding this inquiry are: firstly, whether e-procurement adoption and implementation contribute to achieving quality service delivery; and secondly, how e-procurement adoption can be effectively realized within the Uganda Coffee Industry. To address these questions, the study has laid out specific objectives. Firstly, it seeks to investigate the impact of e-procurement on effective service delivery, analysing how the integration of digital processes influences the overall quality of services provided in the coffee industry. Secondly, it aims to critically analyse the measures required to achieve effective delivery outcomes through the adoption and implementation of e-procurement, assessing the strategies that can maximize the benefits of digital transformation. Furthermore, the research endeavours to identify and examine the key actors instrumental in achieving effective service delivery within the Uganda Coffee Industry. By utilizing Actor-Network Theory, the study will elucidate the network of relationships and collaborations among actors involved in the e-procurement process. The research contributes to addressing a critical gap in the sector. Despite coffee being the leading export crop in Uganda, constituting 16% of total exports, there is a recognized need for digital transformation, specifically in the realm of e-procurement, to enhance the productivity of producers and contribute to the economic growth of the country. The study aims to provide insights into transforming the Uganda Coffee Industry by focusing on improving the e-procurement services delivered to actors in the coffee sector. The three forms of e-procurement investigated in this research (E-Sourcing, E-Payment, and E-Invoicing) serve as focal points in understanding the multifaceted dimensions of digital integration within the Uganda Coffee Industry. This research endeavours to offer practical recommendations for policymakers, industry stakeholders, and the UCDA to strategically leverage e-procurement for the benefit of the entire coffee value chain.
Keywords: e-procurement, effective service delivery, actors, actor-network theory, resource-based view theory, institutional theory, e-invoicing, e-payment, e-sourcing
Procedia PDF Downloads 71
26121 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data
Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin
Abstract:
The high resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e. individual anomalies) or as clusters (i.e. a colony of corrosion anomalies). Although the ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools due to limitations of the tools and associated sizing algorithms, and the detection threshold of the tools (i.e. the minimum detectable feature dimension). Quantifying the measurement error in the ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy the safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have rarely been reported in the literature, and this error is investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single anomaly or a group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributory factors to the relatively high uncertainties associated with the ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and the corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on the ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify the Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline
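As a purely hypothetical illustration of the data-mining classification step mentioned above, the sketch below trains a decision tree to separate Type I and Type II anomalies; the feature names, synthetic data, and labelling rule are assumptions and are not taken from the study.

```python
# Illustrative sketch only: classifying anomalies as Type I (no clustering error)
# or Type II (clustering error) from ILI-reported features. Features and data are assumed.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 500
# Hypothetical ILI features: reported length (mm), depth (%wt), width (mm), anomalies per cluster
X = np.column_stack([
    rng.gamma(2.0, 20.0, n),      # length
    rng.uniform(5, 60, n),        # depth
    rng.gamma(2.0, 10.0, n),      # width
    rng.integers(1, 10, n),       # anomaly count in cluster
])
# Assumed rule for synthetic labels: large clusters tend to carry clustering error (Type II)
y = (X[:, 3] > 4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["Type I", "Type II"]))
```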
Procedia PDF Downloads 309
26120 Reviewing Privacy Preserving Distributed Data Mining
Authors: Sajjad Baghernezhad, Saeideh Baghernezhad
Abstract:
Nowadays, given the human role in the ever-increasing growth of data, methods such as data mining for extracting knowledge are unavoidable. One of the issues in data mining is the inherent distribution of the data: the bases creating or receiving such data usually belong to corporate or non-corporate persons who do not give their information freely to others. Yet there is no guarantee that someone can mine specific data without intruding on the owner's privacy. Sending data and then gathering them through vertically or horizontally partitioned software depends on the type of privacy preservation applied and is also carried out to improve data privacy. In this study, an attempt was made to compare privacy-preserving data mining methods comprehensively; general methods such as data randomization and encoding, as well as the strong and weak points of each, are also examined.
Keywords: data mining, distributed data mining, privacy protection, privacy preserving
Procedia PDF Downloads 525
26119 Multimodal Database of Retina Images for Africa: The First Open Access Digital Repository for Retina Images in Sub Saharan Africa
Authors: Simon Arunga, Teddy Kwaga, Rita Kageni, Michael Gichangi, Nyawira Mwangi, Fred Kagwa, Rogers Mwavu, Amos Baryashaba, Luis F. Nakayama, Katharine Morley, Michael Morley, Leo A. Celi, Jessica Haberer, Celestino Obua
Abstract:
Purpose: The main aim of creating the Multimodal Database of Retinal Images for Africa (MoDRIA) was to provide a publicly available repository of retinal images for responsible researchers to conduct algorithm development in a bid to curb the challenges of ophthalmic artificial intelligence (AI) in Africa. Methods: Data and retina images were ethically sourced from sites in Uganda and Kenya. Data on medical history, visual acuity, ocular examination, blood pressure, and blood sugar were collected. Retina images were captured using fundus cameras (Foru3-nethra and Canon CR-Mark-1). Images were stored on a secure online database. Results: The database consists of 7,859 retinal images in portable network graphics format from 1,988 participants. Images from patients with human immunodeficiency virus made up 18.9%, 18.2% of images were from hypertensive patients, 12.8% from diabetic patients, and the rest from 'normal' participants. Conclusion: Publicly available data repositories are a valuable asset in the development of AI technology. Therefore, there is a need for the expansion of MoDRIA so as to provide larger datasets that are more representative of Sub-Saharan data.
Keywords: retina images, MoDRIA, image repository, African database
Procedia PDF Downloads 127
26118 Quantification Model for Capability Evaluation of Optical-Based in-Situ Monitoring System for Laser Powder Bed Fusion (LPBF) Process
Authors: Song Zhang, Hui Wang, Johannes Henrich Schleifenbaum
Abstract:
Due to the increasing demand for quality assurance and reliability in additive manufacturing, the development of an advanced in-situ monitoring system is required to monitor process anomalies as input for further process control. Optical-based monitoring systems, such as CMOS cameras and NIR cameras, have proved to be effective ways to monitor geometrical distortion and exceptional thermal distribution. Therefore, many studies and applications focus on the availability of optical-based monitoring systems for detecting various types of defects. However, the capability of the monitoring setup is not quantified. In this study, a quantification model to evaluate the capability of monitoring setups for the LPBF machine, based on acquired monitoring data of a designed test artifact, is presented, and the design of the relevant test artifacts is discussed. The monitoring setup is evaluated based on its hardware properties, the location of its integration, and the light condition. The methodology of data processing to quantify the capability for each aspect is discussed. The minimal detectable size of the monitoring setup in the application is estimated by quantifying its resolution and accuracy. The quantification model is validated using a CCD camera-based monitoring system for LPBF machines in the laboratory with different setups. The results show that the model quantifies the monitoring system's performance, which makes the evaluation of monitoring systems with the same concept but different setups possible for the LPBF process and provides direction for improving the setups.
Keywords: data processing, in-situ monitoring, LPBF process, optical system, quantification model, test artifact
Procedia PDF Downloads 197
26117 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates
Authors: Jennifer Buz, Alvin Spivey
Abstract:
The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies are now able to purchase protections for vegetated land due to be clear-cut or purchase barren land for reforestation. Therefore, by actively preventing the destruction/decay of plant matter or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique which enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing is ultimately tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors are capable of being observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which we know the species. To accomplish this, we collaborated with the researchers at the Teakettle Experimental Forest. Our remote sensing data come from our airborne "Kato" sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. In order to combine these images into a mosaic, we accounted for potential variations of atmospheric conditions throughout the data collection. To do this, we ran an open source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was acquired using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications will feed directly into our carbon mass model.
Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation
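A minimal sketch of how per-pixel reflectance spectra can be used to train a species classifier, in the spirit of the workflow above; the synthetic spectra, species labels, and the choice of a random forest (in place of whichever classifiers the authors compared) are assumptions.

```python
# Illustrative sketch only: training a species classifier on per-pixel reflectance spectra.
# Band count mirrors the described sensor; species labels and spectra are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n_pixels, n_bands = 3000, 472   # 472 bands as described for the "Kato" sensor
species = ["white_fir", "sugar_pine", "incense_cedar"]  # hypothetical classes

# Synthetic reflectance spectra: each species gets a slightly shifted mean spectrum
labels = rng.integers(0, len(species), n_pixels)
base = np.linspace(0.1, 0.5, n_bands)
X = base + 0.02 * labels[:, None] + rng.normal(scale=0.01, size=(n_pixels, n_bands))

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=2)
clf = RandomForestClassifier(n_estimators=100, random_state=2).fit(X_tr, y_tr)
print("Overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```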
Procedia PDF Downloads 129
26116 Promoting Social Advocacy through Digital Storytelling: The Case of Ocean Acidification
Authors: Chun Chen Yea, Wen Huei Chou
Abstract:
Many chemical changes in the atmosphere and the ocean are invisible to the naked eye, but they have profound impacts. These changes not only confirm the phenomenon of global carbon pollution but also forewarn that more changes are coming. The carbon dioxide gas emitted from the burning of fossil fuels dissolves into the ocean and chemically reacts with seawater to form carbonic acid, which increases the acidity of the originally alkaline seawater. This gradual acidification is occurring at an unprecedented rate and will affect the effective formation of the carapaces of some marine organisms, such as corals and crustaceans, which are almost entirely composed of calcium carbonate. The carapaces of these organisms will become more soluble. Acidified seawater not only threatens the survival of marine life but also negatively impacts the global ecosystem via the food chain. Faced with the threat of ocean acidification, all humans are duty-bound. The industrial sector outputs the highest level of carbon dioxide emissions in Taiwan, and the petrochemical industry is the major contributor. Ever since the construction of Formosa Plastics Group's No. 6 Naphtha Cracker Plant in Yunlin County, there have been many environmental concerns such as air pollution and carbon dioxide emission. The marine life along the coast of Yunlin is directly affected by ocean acidification arising from the carbon emissions. Societal change demands our willingness to act, which is what social advocacy promotes. This study uses digital storytelling for social advocacy, with ocean acidification as the subject of a visual narrative, to demonstrate the subsequent promotion of social advocacy. Storytelling can transform dull knowledge into an engaging narrative of the crisis faced by marine life, and digital dissemination is an effective social-work practice. The visualization promoting awareness of ocean acidification was disseminated via social media platforms such as Facebook and Instagram. Social media enables users to compose their own messages and share information across different platforms, which helps disseminate the core message of social advocacy.
Keywords: digital storytelling, visualization, ocean acidification, social advocacy
Procedia PDF Downloads 117
26115 Localized Analysis of Cellulosic Fibrous Insulation Materials
Authors: Chady El Hachem, Pan Ye, Kamilia Abahri, Rachid Bennacer
Abstract:
Considered as a building construction material, and in view of its environmental benefits, wood fiber insulation is the material of interest in this work. The definition of an adequate representative elementary volume that guarantees a reliable understanding of the macroscopic hygrothermal phenomena is very critical. At the microscopic scale, when subjected to hygric solicitations, the fibers undergo local dimensional variations. It is therefore necessary to master this behavior, which affects the global response of the material. This study consists of an experimental procedure using a non-destructive method, X-ray tomography, followed by morphological post-processing analysis using the ImageJ software. A refined investigation took place in order to identify the representative elementary volume and the resolution sufficient for accurate structural analysis. The second part of this work was to evaluate the microscopic hygric behavior of the studied material. Many parameters were taken into consideration, such as the evolution of the fiber diameters and their distribution along the sorption cycle, the porosity, and the evolution of the water content. In addition, heat transfer simulations based on the resolution of the energy equation were carried out on the real structure. Further, the problem of the representative elementary volume was elaborated for such a heterogeneous material. Moreover, the material's porosity and its fibers' thicknesses show a very strong correlation with the water content. These results provide the literature with a very good understanding of the behavior of wood fiber insulation.
Keywords: hygric behavior, morphological characterization, wood fiber insulation material, x-ray tomography
Procedia PDF Downloads 267
26114 Fake News and Conspiracy Narratives in the Covid-19 Crisis: An International Comparison
Authors: Caja Thimm
Abstract:
Already well before the Corona pandemic hit the world, 'fake news' was no longer regarded as a harmless twist of the truth but as intentionally composed disinformation, often with the goal of manipulative populist propaganda. During the Corona crisis, conspiracy narratives in particular have become a worldwide phenomenon with dangerous consequences (anti-vaccination myths). The success of this manipulated news needs to be counteracted by trustworthy news, which in Europe particularly includes public broadcasting media and their social media channels. To understand better how the main public broadcasters in Germany, the UK, and France used Instagram strategically, a comparative study was carried out. In our empirical study, a comparative analysis of Instagram during the Corona crisis, we compared the activities of selected formats during the crisis in order to see how the public broadcasters reached their audiences and how this might, in the longer run, affect journalistic strategies on social media platforms. A first analysis showed that the increase in the use of social media overall was striking. Almost one in two adult online users (48%) obtained information about the virus on social media, and in total, 38% of the younger age group (18-24) looked for Covid-19 information on Instagram, so the platform can be regarded as one of the central digital spaces for Corona-related information searches. Quantitative measures showed that 47% of recent posts by the broadcasters were related to Corona, and 7% treated conspiracy myths. For the more detailed content analysis, the following categories of analysis were applied: digital storytelling and Instagram stories, textuality and semantic keys, links to information, stickers, video chat, fact checking, news ticker, service, and infographics and animated tables. In addition to these basic features, we particularly looked for new formats created during the crisis. The journalistic use of social media platforms opens up immediate and creative ways of applying the media logics of the respective platforms, and particularly the BBC and ARD formats proved to be interactive, responsive, and entertaining. Among them were new formats such as a space for user questions and personal uploads, interviews, music, comedy, etc. The fact-checking channel in particular got a lot of attention, as many user questions were focused on the conspiracy theories which dominated the public discourse during many weeks in 2020. In the presentation, we will introduce eight particular strategies that show how public broadcasting journalism can adopt digital platforms, use them creatively, and hence help to counteract conspiracy narratives and fake news.
Keywords: fake news, social media, digital journalism, digital methods
Procedia PDF Downloads 156
26113 The OQAM-OFDM System Using WPT/IWPT Replaced FFT/IFFT
Authors: Alaa H. Thabet, Ehab F. Badran, Moustafa H. Aly
Abstract:
With the rapid expansion of wireless digital communications, demand for wireless systems that are reliable and have high spectral efficiency has increased too. The FBMC scheme based on OFDM/OQAM has been recognized for its good performance in achieving high data rates. The Fast Fourier Transform (FFT) has been used to produce the orthogonal sub-carriers. However, the FFT-based OFDM system has drawbacks, namely a high peak-to-average power ratio (PAPR) and synchronization issues. In this paper, the Wavelet Packet Transform (WPT) is used in place of the FFT and shows better performance.
Keywords: OQAM-OFDM, wavelet packet transform, PAPR, FFT
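The sketch below shows how the PAPR of a conventional IFFT-based OFDM symbol can be measured, which is the baseline a WPT-based system would be compared against; the subcarrier count and modulation are assumptions, and the paper's WPT implementation is not reproduced here.

```python
# Illustrative sketch only: measuring the PAPR of a conventional IFFT-based OFDM symbol,
# the baseline against which a WPT-based multicarrier system would be compared.
# Subcarrier count and QAM order are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_subcarriers = 256
# Random 4-QAM (QPSK) symbols on each subcarrier
bits = rng.integers(0, 2, (n_subcarriers, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

# OFDM modulation: IFFT across subcarriers
time_signal = np.fft.ifft(symbols) * np.sqrt(n_subcarriers)

power = np.abs(time_signal) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR of one OFDM symbol: {papr_db:.2f} dB")
```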
Procedia PDF Downloads 460
26112 Cities Idioms Together with ICT and Countries Interested in the Smart City: A Review of Current Status
Authors: Qasim HamaKhurshid HamaMurad, Normal Mat Jusoh, Uznir Ujang
Abstract:
The concept of a city with an information and communication technology (ICT) infrastructure embraces several definitions depending on the meaning of the word "smart": intelligent city, smart city, knowledge city, ubiquitous city, sustainable city, and digital city. Many definitions of the city exist, but this chapter explores which one has been universally acknowledged. From the literature analysis, it emerges that "smart city" is the terminology most used in the literature, across the digital databases, to indicate the smartness of a city. This paper explores the research from seven main digital databases and journals about the smart city from January 2015 to February 2020 in order to (a) examine, over time, the causes of the smart city phenomenon and other related concepts in the literature of the last five years; (b) review the terminology, to see how and where the smart city is specified and how the different definitions relate; (c) consider, geographically, where the greatest concentrations of smart cities are in the world and whether Malaysia is interacting with the smart city; and (d) determine how many papers about smart cities were published across Malaysia from 2015 to 2020. Three steps are followed to accomplish the goal. (1) A systematic literature review search strategy was built to gather a representative subset of papers on the smart city and other definitions, utilizing Google Scholar, Elsevier, Scopus, ScienceDirect, IEEE Xplore, Web of Science, and Springer, from January 2015 to February 2020. (2) A bibliometric map was formed, based on the bibliometric evaluation, using the VOSviewer mapping technique to visualize differences. (3) The VOSviewer application program was used to build the initial clusters. The bibliometric map visualizes the analytical findings, which targeted word harmony.
Keywords: bibliometric research, smart city, ICT, VOSviewer, urban modernization
Procedia PDF Downloads 202
26111 Memristor-A Promising Candidate for Neural Circuits in Neuromorphic Computing Systems
Authors: Juhi Faridi, Mohd. Ajmal Kafeel
Abstract:
The advancements in the field of Artificial Intelligence (AI) and technology have led to the evolution of an intelligent era. Neural networks, which have computational power and learning ability similar to the brain's, are one of the key AI technologies. A neuromorphic computing system (NCS) consists of the synaptic device, the neuronal circuit, and the neuromorphic architecture. Memristors are a promising candidate for neuromorphic computing systems, but when it comes to neuromorphic computing, the conductance behavior of the synaptic memristor or neuronal memristor needs to be studied thoroughly in order to fathom the underlying neuroscience or computer science. Furthermore, there is a need for more simulation work to utilize the existing device properties and to provide guidance for the development of future devices with different performance requirements; hence, the development of an NCS needs more simulation work to make use of existing device properties. This work aims to provide an insight into building neuronal circuits using memristors to achieve a memristor-based NCS. Here we shed light on the research conducted in the field of memristors for building analog and digital circuits in order to motivate research in the field of NCS by building memristor-based neural circuits for advanced AI applications. This literature review is a step in that direction: we describe the various key findings about memristors and their analog and digital circuits implemented over the years, which can be further utilized in implementing the neuronal circuits in the NCS. This work aims to help electronic circuit designers understand how the research on memristors has progressed and how these findings can be used in implementing the neuronal circuits needed for recent progress in the NCS.
Keywords: analog circuits, digital circuits, memristors, neuromorphic computing systems
Procedia PDF Downloads 174
26110 Transmission Line Matrix (TLM) Modelling of Microstrip Circular Antenna
Authors: Jugoslav Jokovic, Tijana Dimitrijevic, Nebojsa Doncov
Abstract:
The goal of this paper is to investigate the possibilities and effectiveness of the TLM (Transmission Line Matrix) method for modelling up-to-date microstrip antennas with circular geometry, which have significant application in modern wireless communication systems. The coaxially fed microstrip antenna configurations with a circular patch are analyzed by using the in-house 3DTLMcyl_cw solver, based on the computational electromagnetic TLM method adapted to the cylindrical grid and enhanced with the compact wire model. As opposed to the widely used rectangular TLM mesh, where a staircase approximation has to be used to describe curved boundaries, precise modelling of circular boundaries can be accomplished in the cylindrical grid irrespective of the mesh resolution. Using the compact wire model incorporated in the cylindrical mesh, it is possible to model the coaxial feed and include the influence of the real excitation in the antenna model. The conventional and inverted configurations of a coaxially fed circular patch antenna are considered, comparing the resonances obtained using the TLM cylindrical model with results reached by the corresponding model in a rectangular grid as well as with experimental ones. Bearing in mind that the accuracy of simulated results depends on a relevantly created model, it is important, besides the structure geometry and dimensions, to consider additional modelling issues regarding appropriate mesh resolution and a relevant extension of the mesh around the considered structure that would provide convergence of the results.
Keywords: computational electromagnetics, coaxial feed, microstrip antenna, TLM modelling
Procedia PDF Downloads 280
26109 Interactive and Innovative Environments for Modeling Digital Educational Games and Animations
Authors: Ida Srdić, Luka Mandić, Lidija Mandić
Abstract:
Digitization and the intensive use of tablets, smartphones, the internet, and mobile and web applications have massively disrupted our habits and the way audiences (especially youth) consume content. To introduce educational content into games and animations, and at the same time keep it interesting and compelling for kids, is a challenge. In our work, we compare the different possibilities and potentials that digital games could provide to successfully mitigate the direct connection with education. We analyze the main directions and educational methods in game-based learning and the possibilities of interactive modeling through questionnaires on user experience and requirements. Pre- and post-intervention quantitative surveys will be conducted in order to measure levels of objective knowledge as well as the perception of the games. This approach enables a quantitative and objective evaluation of the impact the game has on participants. We will also discuss the main barriers to the use of games in education and how games can best be used for learning.
Keywords: Bloom's taxonomy, epistemic games, learning objectives, virtual learning environments
Procedia PDF Downloads 98
26108 Industry 4.0 Platforms as 'Cluster' ecosystems for small and medium enterprises (SMEs)
Authors: Vivek Anand, Rainer Naegele
Abstract:
Industry 4.0 is a global mega-trend revolutionizing the world of advanced manufacturing, but it also brings up challenges for SMEs. In response, many regional as well as digital Industry 4.0 platforms have been set up to boost the competencies of established enterprises as well as SMEs. The concept of 'clusters' is a policy tool that aims to be a starting point for establishing sustainable and self-supporting structures in the industries of a region by identifying competencies and supporting cluster actors with services that match their growth needs. This paper is motivated by the idea that clusters have the potential to enable firms, particularly SMEs, to accelerate the innovation process and the transition to digital technologies. In this research, the efficacy of Industry 4.0 platforms as cluster ecosystems is evaluated, especially for SMEs. Focusing on the Baden-Württemberg region in Germany, an action research method is employed to study how SMEs leverage other actors on Industry 4.0 platforms to further their Industry 4.0 journeys. The aim is to evaluate how such Industry 4.0 platforms stimulate innovation, cooperation, and competitiveness. Additionally, the barriers that keep these platforms from fulfilling their promise to serve as capacity-building cluster ecosystems for SMEs in a region will also be identified. The findings will be helpful for academics and policymakers alike, who can leverage a 'cluster policy' to enable Industry 4.0 ecosystems in their regions. Furthermore, relevant management and policy implications stem from the analysis. This will also be of interest to the various players in a cluster ecosystem, like SMEs and service providers, who benefit from the cooperation and competition. The paper will improve the understanding of how a dialogue orientation, a bottom-up approach, and the active integration of all involved cluster actors enhance the potential of Industry 4.0 platforms. A strong collaborative culture is a key driver of digital transformation and technology adoption across sectors, value chains, and supply chains, and it will position Industry 4.0 platforms at the forefront of the industrial renaissance. Motivated by this argument and based on the results of the qualitative research, a roadmap will be proposed to position Industry 4.0 platforms as effective cluster ecosystems to support Industry 4.0 adoption in a region.
Keywords: cluster policy, digital transformation, industry 4.0, innovation clusters, innovation policy, SMEs and startups
Procedia PDF Downloads 222
26107 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform
Authors: Omaima N. Ahmad AL-Allaf
Abstract:
Over communication networks, images can be easily copied and distributed in an illegal way. Copyright protection for authors and owners is necessary. Therefore, digital watermarking techniques play an important role as a valid solution to authority problems. Digital image watermarking techniques are used to hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and maintain data quality. Therefore, we discuss in this paper two approaches for image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second is based on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with the two approaches separately in the embedding process to transform the cover image. Both PSO and GA use the correlation coefficient to detect the high-energy coefficient for the watermark bit in the original image and then hide the watermark in the original image. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. From the experiments, the PSO approach obtained better results, with PSNR equal to 53 and MSE equal to 0.0039, whereas the GA approach obtained PSNR equal to 50.5 and MSE equal to 0.0048 when using a population size of 100, 150 iterations, and a 3×3 block. According to the results, we can note that a small block size can affect the quality of PSO/GA-based image watermarking because a small block size increases the search area in the watermarked image. Better PSO results were obtained when using a swarm size equal to 100.
Keywords: image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform
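The sketch below is a simplified, assumed illustration of DWT-domain watermark embedding and PSNR/MSE measurement; it does not reproduce the paper's PSO/GA coefficient selection, and the embedding strength and image sizes are arbitrary.

```python
# Illustrative sketch only: embedding a binary watermark in DWT detail coefficients of a
# cover image and measuring PSNR/MSE. The embedding strength and images are assumptions;
# the paper's PSO/GA coefficient selection is not reproduced here.
import numpy as np
import pywt

rng = np.random.default_rng(4)
cover = rng.integers(0, 256, (128, 128)).astype(float)     # stand-in cover image
watermark = rng.integers(0, 2, (64, 64)).astype(float)     # binary watermark

# One-level 2-D DWT of the cover image
cA, (cH, cV, cD) = pywt.dwt2(cover, "haar")

alpha = 2.0                                   # assumed embedding strength
cD_marked = cD + alpha * (2 * watermark - 1)  # additive embedding in the diagonal details

watermarked = pywt.idwt2((cA, (cH, cV, cD_marked)), "haar")

mse = np.mean((cover - watermarked) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"MSE = {mse:.4f}, PSNR = {psnr:.2f} dB")
```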
Procedia PDF Downloads 226
26106 Automatic Diagnosis of Electrical Equipment Using Infrared Thermography
Authors: Y. Laib Dit Leksir, S. Bouhouche
Abstract:
The analysis and processing of databases resulting from infrared thermal measurements made on electrical installations require the development of new tools in order to obtain correct information additional to that of visual inspections. Consequently, methods based on the capture of infrared digital images show great potential and are increasingly employed in various fields. However, there is an enormous need for the development of effective techniques to analyse these databases in order to extract relevant information relating to the state of the equipment. Our goal consists in introducing recent modeling techniques based on new methods of image and signal processing to develop mathematical models in this field. The aim of this work is to capture the anomalies existing in electrical equipment during the inspection of some machines using a FLIR A40 camera. Afterwards, we use binarization techniques in order to select the region of interest, and we compare these methods on the thermal images obtained to choose the best one.
Keywords: infrared thermography, defect detection, troubleshooting, electrical equipment
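As a hedged illustration of the binarization step described above, the following sketch compares Otsu thresholding with a fixed threshold on a synthetic thermal image standing in for a FLIR A40 capture; the threshold value and image content are assumptions.

```python
# Illustrative sketch only: comparing two binarization methods (global Otsu vs. a fixed
# threshold) to isolate hot regions in a thermal image. The synthetic image stands in for
# a real capture; threshold values are assumptions.
import numpy as np
import cv2

rng = np.random.default_rng(5)
thermal = (40 + 5 * rng.standard_normal((240, 320))).astype(np.float32)  # background ~40 C
thermal[100:140, 150:200] += 35.0                                        # hot spot (anomaly)

# Normalize to 8-bit for thresholding
img8 = cv2.normalize(thermal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Method 1: Otsu's automatic threshold
_, mask_otsu = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Method 2: fixed threshold (assumed value)
_, mask_fixed = cv2.threshold(img8, 180, 255, cv2.THRESH_BINARY)

print("Hot pixels (Otsu):", int(np.count_nonzero(mask_otsu)),
      "Hot pixels (fixed):", int(np.count_nonzero(mask_fixed)))
```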
Procedia PDF Downloads 476
26105 A Newspapers Expectations Indicator from Web Scraping
Authors: Pilar Rey del Castillo
Abstract:
This document describes the building of an average indicator of the general sentiments about the future expressed in newspapers in Spain. The raw data are collected through the scraping of the Digital Periodical and Newspaper Library website. Basic tools of natural language processing are then applied to the collected information to evaluate the sentiment strength of each word in the texts using a polarized dictionary. The last step consists of summarizing these sentiments to produce daily indices. The results are a first insight into the applicability of these techniques to produce periodic sentiment indicators.
Keywords: natural language processing, periodic indicator, sentiment analysis, web scraping
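A minimal sketch of the dictionary-based scoring and daily aggregation described above; the polarity entries and sample headlines are invented for illustration and are not from the study.

```python
# Illustrative sketch only: scoring texts with a small polarized dictionary and
# aggregating the scores into a daily index. Dictionary entries and sample texts are assumed.
from collections import defaultdict

polarity = {"growth": 1, "recovery": 1, "optimism": 1,
            "crisis": -1, "recession": -1, "uncertainty": -1}  # toy polarized dictionary

articles = [
    ("2020-03-01", "economic crisis deepens amid uncertainty"),
    ("2020-03-01", "signs of recovery in exports"),
    ("2020-03-02", "analysts express optimism about growth"),
]

daily_scores = defaultdict(list)
for date, text in articles:
    score = sum(polarity.get(word, 0) for word in text.lower().split())
    daily_scores[date].append(score)

daily_index = {date: sum(scores) / len(scores) for date, scores in daily_scores.items()}
print(daily_index)   # e.g. {'2020-03-01': -0.5, '2020-03-02': 2.0}
```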
Procedia PDF Downloads 133
26104 Secure Data Sharing of Electronic Health Records With Blockchain
Authors: Kenneth Harper
Abstract:
The secure sharing of Electronic Health Records (EHRs) is a critical challenge in modern healthcare, demanding solutions to enhance interoperability, privacy, and data integrity. Traditional standards like Health Information Exchange (HIE) and HL7 have made significant strides in facilitating data exchange between healthcare entities. However, these approaches rely on centralized architectures that are often vulnerable to data breaches, lack sufficient privacy measures, and have scalability issues. This paper proposes a framework for secure, decentralized sharing of EHRs using blockchain technology, cryptographic tokens, and Non-Fungible Tokens (NFTs). The blockchain's immutable ledger, decentralized control, and inherent security mechanisms are leveraged to improve transparency, accountability, and auditability in healthcare data exchanges. Furthermore, we introduce the concept of tokenizing patient data through NFTs, creating unique digital identifiers for each record, which allows for granular data access controls and proof of data ownership. These NFTs can also be employed to grant access to authorized parties, establishing a secure and transparent data sharing model that empowers both healthcare providers and patients. The proposed approach addresses common privacy concerns by employing privacy-preserving techniques such as zero-knowledge proofs (ZKPs) and homomorphic encryption to ensure that sensitive patient information can be shared without exposing the actual content of the data. This ensures compliance with regulations like HIPAA and GDPR. Additionally, the integration of Fast Healthcare Interoperability Resources (FHIR) with blockchain technology allows for enhanced interoperability, enabling healthcare organizations to exchange data seamlessly and securely across various systems while maintaining data governance and regulatory compliance. Through real-world case studies and simulations, this paper demonstrates how blockchain-based EHR sharing can reduce operational costs, improve patient outcomes, and enhance the security and privacy of healthcare data. This decentralized framework holds great potential for revolutionizing healthcare information exchange, providing a transparent, scalable, and secure method for managing patient data in a highly regulated environment.
Keywords: blockchain, electronic health records (ehrs), fast healthcare interoperability resources (fhir), health information exchange (hie), hl7, interoperability, non-fungible tokens (nfts), privacy-preserving techniques, tokens, secure data sharing
Procedia PDF Downloads 21
26103 Integrating Neural Linguistic Programming with Exergaming
Authors: Shyam Sajan, Kamal Bijlani
Abstract:
The widespread effects of digital media help people to explore the world more and get entertained with no effort, and people have become fond of this kind of sedentary lifestyle. The increase in sedentary time and the decrease in physical activity have negative impacts on human health. Even though the addiction to video games has been exploited in exergames to make people exercise and enjoy game challenges, the contribution is restricted only to physical wellness. This paper proposes the creation and implementation of a game with the help of digital media in a virtual environment. The game is designed by combining ideas from neural linguistic programming and the Stroop effect, which can also be used to identify a person's mental state, to improve concentration, and to eliminate various phobias. The multiplayer game is played in a virtual environment created with a Kinect sensor to make the game more motivating and interactive.
Keywords: exergaming, Kinect Sensor, Neural Linguistic Programming, Stroop Effect
Procedia PDF Downloads 436
26102 Mobile Marketing Adoption in Pakistan
Authors: Manzoor Ahmad
Abstract:
The rapid advancement of mobile technology has transformed the way businesses engage with consumers, making mobile marketing a crucial strategy for organizations worldwide. This paper presents a comprehensive study on the adoption of mobile marketing in Pakistan, aiming to provide valuable insights into the current landscape, challenges, and opportunities in this emerging market. To achieve this objective, a mixed-methods approach was employed, combining quantitative surveys and qualitative interviews with industry experts, marketers, and consumers. The study encompassed a diverse range of sectors, including retail, telecommunications, banking, and e-commerce, ensuring a comprehensive understanding of mobile marketing practices across different industries. The findings indicate that mobile marketing has gained significant traction in Pakistan, with a growing number of organizations recognizing its potential for reaching and engaging with consumers effectively. Factors such as increasing smartphone penetration, affordable data plans, and the rise of social media usage have contributed to the widespread adoption of mobile marketing strategies. However, several challenges and barriers to mobile marketing adoption were identified. These include issues related to data privacy and security, limited digital literacy among consumers, inadequate infrastructure, and cultural considerations. Additionally, the study highlights the need for tailored and localized mobile marketing strategies to address the diverse cultural and linguistic landscape of Pakistan. Based on the insights gained from the study, practical recommendations are provided to support organizations in optimizing their mobile marketing efforts in Pakistan. These recommendations encompass areas such as consumer targeting, content localization, mobile app development, personalized messaging, and measurement of mobile marketing effectiveness. This research contributes to the existing literature on mobile marketing adoption in developing countries and specifically sheds light on the unique dynamics of the Pakistani market. It serves as a valuable resource for marketers, practitioners, and policymakers seeking to leverage mobile marketing strategies in Pakistan, ultimately fostering the growth and success of businesses operating in this region.
Keywords: mobile marketing, digital marketing, mobile advertising, adoption of mobile marketing
Procedia PDF Downloads 109
26101 Highly Accurate Target Motion Compensation Using Entropy Function Minimization
Authors: Amin Aghatabar Roodbary, Mohammad Hassan Bastani
Abstract:
One of the defects of stepped frequency radar systems is their sensitivity to target motion. In such systems, target motion causes range cell shift, false peaks, Signal to Noise Ratio (SNR) reduction, and range profile spreading because of the power spectrum interference of each range cell with adjacent range cells, which induces distortion in the High Resolution Range Profile (HRRP) and disrupts the target recognition process. Thus, compensation for the effects of the Target Motion Parameters (TMPs) should be employed. In this paper, such a method, based on entropy minimization, is proposed for estimating the TMPs (velocity and acceleration) and consequently eliminating or suppressing their unwanted effects on the HRRP. This method is carried out in two major steps: in the first step, a discrete search is performed over the whole acceleration-velocity lattice, within a specific interval, to find a less accurate minimum point of the entropy function. Then, in the second step, a 1-D search over velocity is done in the locus of that minimum along several constant-acceleration lines, in order to enhance the accuracy of the minimum point found in the first step. The provided simulation results demonstrate the effectiveness of the proposed method.
Keywords: automatic target recognition (ATR), high resolution range profile (HRRP), motion compensation, stepped frequency waveform technique (SFW), target motion parameters (TMPs)
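The two-step search can be sketched as follows; a stand-in cost function replaces the entropy of the motion-compensated range profile, and the search intervals and grid spacings are assumptions.

```python
# Illustrative sketch only of the two-step search described above: a coarse search over an
# acceleration-velocity grid followed by a finer 1-D search over velocity along a few
# constant-acceleration lines. The entropy function here is a placeholder; in the radar
# application it would be the entropy of the motion-compensated range profile.
import numpy as np

def profile_entropy(v, a, v_true=12.3, a_true=1.7):
    # Placeholder cost: smooth bowl whose minimum plays the role of the true TMPs
    return np.log1p((v - v_true) ** 2 + 4.0 * (a - a_true) ** 2)

# Step 1: coarse discrete search over the whole velocity-acceleration lattice
v_grid = np.linspace(0.0, 30.0, 61)     # m/s, assumed interval
a_grid = np.linspace(-5.0, 5.0, 41)     # m/s^2, assumed interval
coarse = np.array([[profile_entropy(v, a) for v in v_grid] for a in a_grid])
ia, iv = np.unravel_index(np.argmin(coarse), coarse.shape)
v0, a0 = v_grid[iv], a_grid[ia]

# Step 2: finer 1-D search over velocity along several constant-acceleration lines near a0
best = (np.inf, v0, a0)
for a in a0 + np.linspace(-0.25, 0.25, 5):
    v_fine = np.linspace(v0 - 1.0, v0 + 1.0, 401)
    e = profile_entropy(v_fine, a)
    j = int(np.argmin(e))
    if e[j] < best[0]:
        best = (e[j], v_fine[j], a)

print(f"Estimated velocity = {best[1]:.3f} m/s, acceleration = {best[2]:.3f} m/s^2")
```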
Procedia PDF Downloads 152
26100 A Context-Centric Chatbot for Cryptocurrency Using the Bidirectional Encoder Representations from Transformers Neural Networks
Authors: Qitao Xie, Qingquan Zhang, Xiaofei Zhang, Di Tian, Ruixuan Wen, Ting Zhu, Ping Yi, Xin Li
Abstract:
Inspired by the recent movement of digital currency, we are building a question answering system concerning the subject of cryptocurrency using Bidirectional Encoder Representations from Transformers (BERT). The motivation behind this work is to properly assist digital currency investors by directing them to the corresponding knowledge bases that can offer them help and by increasing the querying speed. BERT, one of the newest language models in natural language processing, was investigated to improve the quality of the generated responses. We studied different combinations of hyperparameters of the BERT model to obtain the best-fitting responses. Further, we created an intelligent chatbot for cryptocurrency using BERT. A chatbot using BERT shows great potential for the further advancement of a cryptocurrency market tool. We show that BERT neural networks generalize well to other tasks by applying them successfully to cryptocurrency.
Keywords: bidirectional encoder representations from transformers, BERT, chatbot, cryptocurrency, deep learning
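As an assumed illustration only, the following sketch shows a BERT-style extractive question-answering step using the Hugging Face transformers pipeline; the model name and context snippet are placeholders and are not the system described in the abstract.

```python
# Illustrative sketch only: a BERT-style extractive question-answering step over a small
# cryptocurrency knowledge snippet. Model name and context text are assumptions.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Bitcoin is a decentralized digital currency introduced in 2009. "
    "Transactions are recorded on a public ledger called the blockchain."
)
result = qa(question="Where are Bitcoin transactions recorded?", context=context)
print(result["answer"], f"(score={result['score']:.2f})")
```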
Procedia PDF Downloads 147
26099 Power of Sales and Marketing in Electronics Engineering with E-commerce: Connecting the Circuits
Authors: Muhammad Awais Kiani, Maryam Kiani
Abstract:
In today's digital age, the field of electronics engineering is experiencing unprecedented growth and innovation. To keep pace with this rapidly evolving industry, effective sales and marketing strategies are crucial, especially when combined with the power of e-commerce. This study explores the significance of integrating sales and marketing techniques with e-commerce platforms in the context of electronics engineering. It highlights the benefits, challenges, and best practices in leveraging e-commerce for sales and marketing in this industry. By embracing e-commerce, electronics engineering companies can reach a wider customer base, enhance brand visibility, and personalize customer experiences. Furthermore, this abstract delves into the importance of utilizing digital marketing tools such as search engine optimization (SEO), social media marketing, and content creation to optimize online sales. Therefore, this research aims to provide insights and recommendations for electronics engineering professionals to effectively navigate the dynamic landscape of sales and marketing in conjunction with e-commerce.
Keywords: electronics engineering, marketing, sales, e-commerce
Procedia PDF Downloads 75
26098 Using Electronic Books to Enhance the Museum Visitors' Experience
Authors: Elvin Karaaslan Klose
Abstract:
Museums are important sites of informal, often semi-structured and self-paced learning. Challenged by digital alternatives and increased expectations from their visitors, museums have to adapt to the digital age by enriching their collections and educational content with additional options for interactivity. One such option lies in the concept of the electronic book, which can be used either on dedicated devices or downloaded by visitors before entering the exhibition area. These electronic books serve as an alternative or supplement to the classic audio guide and provide visitors with information about artifacts as well as background stories and factoids about the subjects of the exhibition. Bringing such interactive elements into the museum experience has been shown to increase information retention and enjoyment among young visitors and adults. This article aims to bring together both theoretical frameworks and practical examples of how interactive media in the form of electronic books can be used to enhance the experience of the museum visitor.
Keywords: electronic books, interactive media, arts education, museum education
Procedia PDF Downloads 213
26097 A Network-Theoretical Perspective on Music Analysis
Authors: Alberto Alcalá-Alvarez, Pablo Padilla-Longoria
Abstract:
The present paper describes a framework for constructing mathematical networks encoding relevant musical information from a music score for structural analysis. These graphs encompass statistical information about music elements such as notes, chords, rhythms, intervals, etc., and the relations among them, and so become helpful in visualizing and understanding important stylistic features of a music fragment. In order to build such networks, musical data is parsed out of a digital symbolic music file. This data undergoes different analytical procedures from Graph Theory, such as measuring the centrality of nodes, community detection, and entropy calculation. The resulting networks reflect important structural characteristics of the fragment in question: predominant elements, connectivity between them, and complexity of the information contained in it. Music pieces in different styles are analyzed, and the results are contrasted with the traditional analysis outcome in order to show the consistency and potential utility of this method for music analysis.
Keywords: computational musicology, mathematical music modelling, music analysis, style classification
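A minimal sketch of the kind of network construction and analysis described above, using a toy melody in place of a parsed score; the note sequence and the specific centrality and community measures are assumptions for illustration.

```python
# Illustrative sketch only: building a note-transition network from a short melody and
# computing centrality and communities. A real score would be parsed from a symbolic music file.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

melody = ["C4", "E4", "G4", "E4", "C4", "D4", "F4", "D4", "B3", "C4"]

G = nx.DiGraph()
for a, b in zip(melody, melody[1:]):
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1          # count repeated transitions
    else:
        G.add_edge(a, b, weight=1)

centrality = nx.degree_centrality(G)
communities = greedy_modularity_communities(G.to_undirected())

print("Most central notes:", sorted(centrality, key=centrality.get, reverse=True)[:3])
print("Note communities:", [sorted(c) for c in communities])
```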
Procedia PDF Downloads 102
26096 Digital Advance Care Planning and Directives: Early Observations of Adoption Statistics and Responses from an All-Digital Consumer-Driven Approach
Authors: Robert L. Fine, Zhiyong Yang, Christy Spivey, Bonnie Boardman, Maureen Courtney
Abstract:
Importance: Barriers to traditional advance care planning (ACP) and advance directive (AD) creation have limited the promise of ACP/AD for individuals and families, the healthcare team, and society. Reengineering ACP by using a web-based, consumer-driven process has recently been suggested. We report early experience with such a process. Objective: Begin to analyze the potential of the creation and use of ACP/ADs as generated by a consumer-friendly, digital process by 1) assessing the likelihood that consumers would create ACP/ADs without structured intervention by medical or legal professionals, and 2) analyzing the responses to determine if the plans can help doctors better understand a person's goals, preferences, and priorities for their medical treatments and the naming of healthcare agents. Design: The authors chose 900 users of MyDirectives.com, a digital ACP/AD tool, solely based on their state of residence in order to achieve proportional representation of all 50 states by population size, and then reviewed their responses, summarizing these through descriptive statistics including treatment preferences, demographics, and revision of preferences. Setting: General United States population. Participants: The 900 participants had an average age of 50.8 years (SD = 16.6); 84.3% of the men and 91% of the women were in self-reported good health when signing their ADs. Main measures: Preferences regarding the use of life-sustaining treatments, where to spend final days, consulting a supportive and palliative care team, attempted cardiopulmonary resuscitation (CPR), autopsy, and organ and tissue donation. Results: Nearly 85% of respondents prefer cessation of life-sustaining treatments during their final days, whenever those may be; 76% prefer to spend their final days at home or in a hospice facility; and 94% wanted their future doctors to consult a supportive and palliative care team. 70% would accept attempted CPR in certain limited circumstances. Most respondents would want an autopsy under certain conditions, and 62% would like to donate their organs. Conclusions and relevance: Analysis of early experience with an all-digital web-based ACP/AD platform demonstrates that individuals from a wide range of ages and conditions can engage in an interrogatory process about values, goals, preferences, and priorities for their medical treatments by developing advance directives and can easily make changes to the AD created. Online creation, storage, and retrieval of advance directives have the potential to remove barriers to ACP/AD and, thus, to further improve patient-centered end-of-life care.
Keywords: Advance Care Plan, Advance Decisions, Advance Directives, Consumer, Digital, End of Life Care, Goals, Living Wills, Preferences, Universal Advance Directive, Statements
Procedia PDF Downloads 327
26095 Predicting Wearable Technology Readiness in a South African Government Department: Exploring the Influence of Wearable Technology Acceptance and Positive Attitude
Authors: Henda J Thomas, Cornelia PJ Harmse, Cecile Schultz
Abstract:
Wearables are one of the technologies that will flourish within the fourth industrial revolution and digital transformation arenas, allowing employers to integrate collected data into organisational information systems. The study aimed to investigate whether wearable technology readiness can predict employees' acceptance of wearing wearables in the workplace. The factors of technology readiness predisposition that predict acceptance and positive attitudes towards wearable use in the workplace were examined. A quantitative research approach was used. The population consisted of 8 081 employees of the South African Department of Employment and Labour (DEL). Census sampling was used, and questionnaires to collect data were sent electronically to all 8 081 employees; 351 questionnaires were received back. The measuring instrument, called the Technology Readiness and Acceptance Model (TRAM), was used in this study. Four hypotheses were formulated to investigate the relationship between readiness and acceptance of wearables in the workplace. The results showed that the eagerness, optimism, and discomfort scales of technology readiness (TR) consistently predicted technology acceptance (TA). The TR scales of optimism and eagerness were consistent positive predictors of the TA scales, while discomfort proved to be a negative predictor for two of the three TA scales. Insecurity was found not to be a predictor of TA. It is recommended that the digital transformation policy of the DEL be revised. Wearables in the workplace should be embraced from the viewpoint of convenience, automation, and seamless integration with the DEL information systems. The empirical contribution of this study can be seen in the fact that positive attitude emerged as a factor that extends the TRAM. In this study, positive attitude is identified as a new dimension of the TRAM not found in the original TA model and subsequent studies of the TRAM. Furthermore, this study found that Perceived Usefulness (PU) and Behavioural Intention to Use (BIU) could not be separated but formed one factor. The methodological contribution of this study can lead to the development of a Wearable Readiness and Acceptance Model (WRAM). To the best of our knowledge, no author has yet introduced the WRAM into the body of knowledge.
Keywords: technology acceptance model, technology readiness index, technology readiness and acceptance model, wearable devices, wearable technology, fourth industrial revolution
Procedia PDF Downloads 89
26094 Civil Discourse in the Digital Age: Perceptions of Age as a Barrier to Civic Engagement
Authors: Julianne Viola
Abstract:
Young people are at a critical stage in their lives, developing from young participants into adult participants in democratic society. At this time, civic engagement is crucial for young people's sense of belonging and future participation in their communities. In adolescence, individuals form their own identities and associations with others and may accomplish this with the help of technology and social media. In the Digital Age, young people and adults use technology as a platform to discuss political issues, including human rights and social justice, but do not always engage in civil discourse. There is an urgent need to investigate this complex interplay of social media, identity formation, and civil discourse as it relates to how teenagers become participants in democratic society and how they engage in civil discourse. This qualitative study draws on theories of identity formation in adolescence and is situated within the literature surrounding teen civic engagement and technology use. Through in-depth interviews with participants aged 14 through 17, this study investigates the ways in which teens conceptualize their civic identities and engagement, their presence online, and civil discourse. The context in which the young people in this study have grown up has the potential to impact and inform these processes. Early results of this study illustrate what it means to be a young person in today's world and how perceptions of others' opinions may influence young people's engagement in their communities and online. Participants in this study often indicated concerns about their age as a constraint on participation in their communities and in society, and a self-imposed restriction on the people with whom they engage in conversation about political and social issues. While the participants shared common concerns and experiences, each participant's unique perspectives and beliefs are viewed with equal importance. The results from this research will help students, teachers, and community groups learn about the reasons for engagement and disengagement among this age group, and about how technology has influenced teens' dialogue about political issues. With this knowledge, academics and school leaders can devise new ways to best teach citizenship skills and civil discourse to students in the Digital Age.
Keywords: civics, digital age, discourse, sociology of youth, youth studies
Procedia PDF Downloads 253