Search results for: hyperdimensional computing
249 Blended Intensive Programmes: A Way Forward to Promote Internationalization in Higher Education
Authors: Sonja Gögele, Petra Kletzenbauer
Abstract:
International strategies are ranked as one of the core activities in the development plans of Austrian universities. This has led to numerous promising activities in terms of internationalization (i.e. development of international degree programmes, increased staff and student mobility, and blended international projects). The latest innovative approach within Erasmus+ is the so-called Blended Intensive Programme (BIP), which combines jointly delivered teaching and learning elements from at least three participating Erasmus universities in a virtual and short-term mobility setup. Students who participate in a BIP can maintain their study plans at their home institution and include the BIP as a parallel activity. This paper presents the experiences of such a programme on the topic of sustainable computing, hosted by the University of Applied Sciences FH JOANNEUM. By means of an online survey and face-to-face interviews with all stakeholders (20 students, 8 professors), the empirical study addresses the challenges of hosting an international blended learning programme (i.e. a virtual phase and an on-site intensive phase) and discusses the impact of such activities in terms of internationalization and Englishization. In this context, key roles are assigned to the development of future transnational and transdisciplinary curricula by considering innovative aspects of learning and teaching (i.e. virtual collaboration, research-based learning).
Keywords: internationalization, englishization, short-term mobility, international teaching and learning
Procedia PDF Downloads 121
248 Ontology Expansion via Synthetic Dataset Generation and Transformer-Based Concept Extraction
Authors: Andrey Khalov
Abstract:
The rapid proliferation of unstructured data in IT infrastructure management demands innovative approaches for extracting actionable knowledge. This paper presents a framework for ontology-based knowledge extraction that combines relational graph neural networks (R-GNN) with large language models (LLMs). The proposed method leverages the DOLCE framework as the foundational ontology, extending it with concepts from ITSMO for domain-specific applications in IT service management and outsourcing. A key component of this research is the use of transformer-based models, such as DeBERTa-v3-large, for automatic entity and relationship extraction from unstructured texts. Furthermore, the paper explores how transfer learning techniques can be applied to fine-tune large language models (LLaMA) to generate synthetic datasets that improve precision in BERT-based entity recognition and ontology alignment. The resulting IT Ontology (ITO) serves as a comprehensive knowledge base that integrates domain-specific insights from ITIL processes, enabling more efficient decision-making. Experimental results demonstrate significant improvements in knowledge extraction and relationship mapping, offering a cutting-edge solution for enhancing cognitive computing in IT service environments.
Keywords: ontology expansion, synthetic dataset, transformer fine-tuning, concept extraction, DOLCE, BERT, taxonomy, LLM, NER
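As a hedged sketch of the transformer-based extraction step, the snippet below runs a Hugging Face token-classification pipeline; the checkpoint name is a hypothetical placeholder for a DeBERTa model fine-tuned on the target ontology's entity types, not the authors' released model.

```python
# Sketch: transformer-based entity extraction from unstructured ITSM text.
# "your-org/deberta-v3-itsm-ner" is a hypothetical fine-tuned checkpoint;
# a base DeBERTa-v3-large model would first need token-classification tuning.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-org/deberta-v3-itsm-ner",  # placeholder model name
    aggregation_strategy="simple",         # merge word pieces into entity spans
)

text = "The incident was escalated to the service desk after the SLA breach."
for entity in ner(text):
    # Each hit carries the span text, predicted label, and confidence score.
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 3))
```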
Procedia PDF Downloads 20
247 Fuzzy Approach for the Evaluation of Feasibility Levels of Vehicle Movement on the Disaster-Stricken Zone's Roads
Authors: Gia Sirbiladze
Abstract:
Route planning problems are among the activities that have the highest impact on logistical planning, transportation, and distribution because of their effects on efficiency in resource management, service levels, and client satisfaction. In extreme conditions, the difficulty of vehicle movement between different customers causes imprecision in the time of movement and uncertainty about the feasibility of movement. A feasibility level of vehicle movement on a closed route of the disaster-stricken zone is defined for the construction of an objective function. Experts' evaluations of the uncertain parameters in q-rung ortho-pair fuzzy numbers (q-ROFNs) are presented. A fuzzy bi-objective combinatorial optimization problem, a fuzzy vehicle routing problem (FVRP), is constructed based on the technique of possibility theory. The FVRP is reduced to the bi-criteria partitioning problem for the so-called "promising" routes, which are selected from all admissible closed routes. The convenient selection of the "promising" routes allows us to solve the reduced problem in real-time computing. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used. Supporting software for the main results has been developed. The constructed model is illustrated with a numerical example.
Keywords: q-rung ortho-pair fuzzy sets, facility location selection problem, multi-objective combinatorial optimization problem, partitioning problem
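For reference, the ε-constraint scalarization used for bi-criteria problems of this kind has the generic form below (illustrative notation, not the paper's exact model); sweeping ε over the attainable range of the second criterion traces out the Pareto-optimal partitions.

```latex
% Generic epsilon-constraint reduction of a bi-criteria problem:
\min_{x \in X} f_1(x)
\quad \text{subject to} \quad f_2(x) \le \varepsilon
```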
Procedia PDF Downloads 136
246 A Proposal for a Secure and Interoperable Data Framework for Energy Digitalization
Authors: Hebberly Ahatlan
Abstract:
The process of digitizing energy systems involves transforming traditional energy infrastructure into interconnected, data-driven systems that enhance efficiency, sustainability, and responsiveness. As smart grids become increasingly integral to the efficient distribution and management of electricity from both fossil and renewable energy sources, the energy industry faces strategic challenges associated with digitalization and interoperability, particularly in the context of modern energy business models such as virtual power plants (VPPs). The critical challenge in modern smart grids is to seamlessly integrate diverse technologies and systems, including virtualization, grid computing and service-oriented architecture (SOA), across the entire energy ecosystem. Achieving this requires addressing issues like semantic interoperability, IT/OT convergence, and digital asset scalability, all while ensuring security and risk management. This paper proposes a four-layer digitalization framework to tackle these challenges, encompassing persistent data protection, trusted key management, secure messaging, and authentication of IoT resources. Data assets generated through this framework enable AI systems to derive insights for improving smart grid operations, security, and revenue generation. This paper also proposes a Trusted Energy Interoperability Alliance as a universal guiding standard in the development of this digitalization framework to support more dynamic and interoperable energy markets.
Keywords: digitalization, IT/OT convergence, semantic interoperability, VPP, energy blockchain
Procedia PDF Downloads 185
245 Julia-Based Computational Tool for Composite System Reliability Assessment
Authors: Josif Figueroa, Kush Bubbar, Greg Young-Morris
Abstract:
The reliability evaluation of composite generation and bulk transmission systems is crucial for ensuring a reliable supply of electrical energy to significant system load points. However, evaluating adequacy indices using probabilistic methods like sequential Monte Carlo simulation can be computationally expensive. Despite this, it is necessary when time-varying and interdependent resources, such as renewables and energy storage systems, are involved. Recent advances in solving power network optimization problems and parallel computing have improved runtime performance while maintaining solution accuracy. This work introduces CompositeSystems, an open-source Composite System Reliability Evaluation tool developed in Julia™, to address the current deficiencies of commercial and non-commercial tools. We describe its design, validation, and effectiveness, which includes analyzing two different formulations of the Optimal Power Flow problem. The simulations demonstrate excellent agreement with existing published studies while improving replicability and reproducibility. Overall, the proposed tool can provide valuable insights into the performance of transmission systems, making it an important addition to the existing toolbox for power system planning.
Keywords: open-source software, composite system reliability, optimization methods, Monte Carlo methods, optimal power flow
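As a toy illustration of the sequential Monte Carlo idea behind such adequacy evaluations, the sketch below samples two-state generator outages hour by hour and accumulates loss-of-load and energy-not-supplied counters (all figures invented; CompositeSystems itself evaluates full network models with optimal power flow):

```python
# Toy sequential Monte Carlo sketch of generation adequacy (illustrative only;
# real composite-system tools add network constraints via optimal power flow).
import random

HOURS = 8760
units = [{"cap": 100.0, "fail": 0.001, "repair": 0.01, "up": True} for _ in range(10)]
load = 800.0          # constant load in MW, purely illustrative
lole_hours, eens_mwh = 0, 0.0

for _ in range(HOURS):
    for u in units:
        # Two-state Markov transition per hour: up -> down or down -> up.
        if u["up"] and random.random() < u["fail"]:
            u["up"] = False
        elif not u["up"] and random.random() < u["repair"]:
            u["up"] = True
    available = sum(u["cap"] for u in units if u["up"])
    shortfall = max(0.0, load - available)
    if shortfall > 0:
        lole_hours += 1          # loss-of-load duration accumulator
        eens_mwh += shortfall    # energy-not-supplied accumulator

print(f"LOLE ~ {lole_hours} h/yr, EENS ~ {eens_mwh:.1f} MWh/yr (one sample year)")
```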
Procedia PDF Downloads 76
244 A Unified Webcam Proctoring Solution on Edge
Authors: Saw Thiha, Jay Rajasekera
Abstract:
A boom in video conferencing has generated millions of hours of video data daily to be analyzed. However, such enormous data pose certain scalability issues if they are to be analyzed efficiently, let alone in real time, as online conferences can involve hundreds of people and can last for hours. This paper proposes an efficient online proctoring solution that can analyze online conferences in real time on edge devices such as Android, iOS, and desktops. Since the computation can be done upfront on the devices where online conferences take place, it can scale well without requiring intensive resources such as GPU servers and complex cloud infrastructure. According to the linear models, face orientation does indeed impact the perceived eye openness. Also, the proposed z-score facial landmark standardization was shown to be effective in detecting face orientation and contributed to classifying eye blinks with a single eyelid distance computation, while achieving a better F1 score and accuracy than the Eye Aspect Ratio (EAR) threshold method. Last but not least, the authors implemented the solution natively in the MediaPipe framework and open-sourced it along with the reproducible experimental results on GitHub. The solution provides face orientation, eye blink, facial activity, and translation detections out of the box and is highly customizable and extensible.
Keywords: android, desktop, edge computing, blink, face orientation, facial activity and translation, MediaPipe, open source, real-time, video conference, web, iOS, z-score facial landmark standardization
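A rough sketch of the z-score landmark standardization and single-eyelid-distance blink test described above (the threshold is illustrative; the landmark indices follow MediaPipe FaceMesh conventions but should be checked against the authors' released code):

```python
# Rough sketch: z-score standardization of facial landmarks, then a blink
# test from a single eyelid distance. The threshold is illustrative; indices
# 159/145 follow MediaPipe FaceMesh eyelid conventions and should be verified.
import numpy as np

def standardize(landmarks):
    # Z-score per axis: removes dependence on face scale and position.
    return (landmarks - landmarks.mean(axis=0)) / landmarks.std(axis=0)

def is_blink(landmarks, upper_idx=159, lower_idx=145, threshold=0.25):
    z = standardize(landmarks)
    eyelid_distance = np.linalg.norm(z[upper_idx] - z[lower_idx])
    return eyelid_distance < threshold  # a closed eye gives a small gap

# Synthetic landmarks; real input would come from MediaPipe FaceMesh (468 x 2).
pts = np.random.rand(468, 2)
print(is_blink(pts))
```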
Procedia PDF Downloads 98
243 Road Condition Monitoring Using Built-in Vehicle Technology Data, Drones, and Deep Learning
Authors: Judith Mwakalonge, Geophrey Mbatta, Saidi Siuhi, Gurcan Comert, Cuthbert Ruseruka
Abstract:
Transportation agencies worldwide continuously monitor their roads' conditions to minimize road maintenance costs and maintain public safety and rideability quality. Existing methods for carrying out road condition surveys involve manual observation of roads using standard survey forms, done by qualified road condition surveyors or engineers either on foot or by vehicle. Automated road condition survey vehicles exist; however, they are very expensive since they require special vehicles equipped with sensors for data collection together with data processing and computing devices. The manual methods are expensive, time-consuming, infrequent, and can hardly provide real-time information on road conditions. This study contributes to this arena by utilizing built-in vehicle technologies, drones, and deep learning to automate road condition surveys while using low-cost technology. A single model is trained to capture flexible pavement distresses (potholes, rutting, cracking, and raveling), thereby providing a more cost-effective and efficient road condition monitoring approach that can also provide real-time road conditions. Additionally, data fusion is employed to enhance the road condition assessment with data from vehicles and drones.
Keywords: road conditions, built-in vehicle technology, deep learning, drones
Procedia PDF Downloads 128
242 Symbolic Partial Differential Equations Analysis Using Mathematica
Authors: Davit Shahnazaryan, Diogo Gomes, Mher Safaryan
Abstract:
Many symbolic computations and manipulations required in the analysis of partial differential equations (PDE) or systems of PDEs are tedious and error-prone. These computations arise when determining conservation laws, entropies, or integral identities, which are essential tools for the study of PDEs. Here, we discuss a new Mathematica package for the symbolic analysis of PDEs that automates multiple tasks, saving time and effort. Methodologies: During the research, we have used concepts of linear algebra and partial differential equations. We have been working on creating algorithms based on theoretical mathematics to obtain the results mentioned below. Major Findings: Our package provides the following functionalities: finding the symmetry group of different PDE systems; generating polynomials invariant with respect to different symmetry groups; simplifying integral quantities by integration by parts and null-Lagrangian cleaning; computing general forms of expressions by integration by parts; finding equivalent forms of an integral expression that are simpler or more symmetric; and determining necessary and sufficient conditions on the coefficients for the positivity of a given symbolic expression. Conclusion: Using this package, we can simplify integral identities and find conserved and dissipated quantities of a time-dependent PDE or system of PDEs. Some examples in the theory of mean-field games and semiconductor equations are discussed.
Keywords: partial differential equations, symbolic computation, conserved and dissipated quantities, mathematica
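As a one-line illustration of the integration-by-parts simplification the package automates (a standard identity, assuming u vanishes on the boundary of Ω):

```latex
% Integration by parts turns a second-derivative term into a signed square,
% the kind of rewriting automated by the package (u = 0 on the boundary):
\int_\Omega u \, u_{xx} \, dx \;=\; -\int_\Omega (u_x)^2 \, dx
```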
Procedia PDF Downloads 164
241 Different Goals and Strategies of Smart Cities: Comparative Study between European and Asian Countries
Authors: Yountaik Leem, Sang Ho Lee
Abstract:
In this paper, the different goals of smart cities and the ways to reach them, as shown in many countries during planning and implementation processes, are discussed. Each country has dealt with technologies embedded into urban space, as ICTs (information and communication technologies) developed, for its own purposes and in its own way. For example, European countries tried to adapt technologies to reduce greenhouse gas emissions and counter global warming, while US-based global companies focused on ways of living with ICTs, such as EasyLiving of Microsoft™ and CoolTown of Hewlett-Packard™, during the last decade of the 20th century. In the North-East Asian countries, urban spaces with ICTs were developed on a large scale from the viewpoint of capitalism. The ubiquitous city, first introduced in Korea and named after Mark Weiser's concept of ubiquitous computing, pursued new urban development with advanced technologies and high-tech infrastructure, including wired and wireless networks. Japan has developed smart cities as comprehensive and technology-intensive cities that will lead the nation's other industries in the future. Not only the goals and strategies but also new directions toward which smart cities are oriented are suggested at the end of the paper. Like the Finnish smart community whose slogan is 'one more hour a day for citizens,' the recent trend is toward the everyday lives and cultures of human beings, not capital gains nor physical urban spaces.
Keywords: smart cities, urban strategy, future direction, comparative study
Procedia PDF Downloads 263
240 Statistical Mechanical Approach in Modeling of Hybrid Solar Cells for Photovoltaic Applications
Authors: A. E. Kobryn
Abstract:
We present both descriptive and predictive modeling of the structural properties of blends of PCBM or organic-inorganic hybrid perovskites of the type CH3NH3PbX3 (X=Cl, Br, I) with P3HT, P3BT or the squaraine SQ2 dye sensitizer, including adsorption on TiO2 clusters having a rutile (110) surface. In our study, we use a methodology that allows computing the microscopic structure of blends on the nanometer scale and gaining insight into the miscibility of their components at various thermodynamic conditions. The methodology is based on the integral equation theory of molecular liquids in the reference interaction site representation/model (RISM) and uses the universal force field. Input parameters for RISM, such as optimized molecular geometries and the charge distribution of interaction sites, are derived with the use of density functional theory methods. To compare the diffusivity of PCBM in binary blends with P3HT and P3BT, respectively, the study is complemented with MD simulation. Very good agreement with experiment and with reports of alternative modeling or simulation is observed for the PCBM in P3HT system. The performance of P3BT with perovskites, however, is as expected. The calculated nanoscale morphologies of blends of P3HT, P3BT or SQ2 with perovskites, including adsorption on TiO2, are all new and serve as an instrument in the rational design of organic/hybrid photovoltaics. They are used in collaboration with experts who actually make prototypes or devices for practical applications.
Keywords: multiscale theory and modeling, nanoscale morphology, organic-inorganic halide perovskites, three-dimensional distribution
Procedia PDF Downloads 156
239 Improving Flash Flood Forecasting with a Bayesian Probabilistic Approach: A Case Study on the Posina Basin in Italy
Authors: Zviad Ghadua, Biswa Bhattacharya
Abstract:
The Flash Flood Guidance (FFG) provides the rainfall amount of a given duration necessary to cause flooding. The approach is based on the development of rainfall-runoff curves, which help us to find the rainfall amount that would cause flooding. An alternative approach, mostly tested on Italian Alpine catchments, is based on determining threshold discharges from past events and on checking whether an oncoming flood exceeds critical discharge thresholds found beforehand. Both approaches suffer from large uncertainties in forecasting flash floods as, due to the simplistic approach followed, the same rainfall amount may or may not cause flooding. This uncertainty leads to the question of whether a probabilistic model is preferable to a deterministic one in forecasting flash floods. We propose the use of a Bayesian probabilistic approach in flash flood forecasting. A prior probability of flooding is derived from historical data. Additional information, such as the antecedent moisture condition (AMC) and rainfall exceedance over a threshold, is used in computing the likelihood of observing these conditions given that a flash flood has occurred. Finally, the posterior probability of flooding is computed using the prior probability and the likelihood. The variation of the computed posterior probability with rainfall amount and AMC demonstrates the suitability of the approach for decision making in an uncertain environment. The methodology has been applied to the Posina basin in Italy. From the promising results obtained, we can conclude that the Bayesian approach in flash flood forecasting provides more realistic forecasts than the FFG.
Keywords: flash flood, Bayesian, flash flood guidance, FFG, forecasting, Posina
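The update described is ordinary Bayes' rule; writing F for the flooding event and E for the observed evidence (AMC class and rainfall exceedance), the posterior reads (illustrative notation):

```latex
% Posterior probability of flooding given observed evidence E = (AMC, rainfall):
P(F \mid E) \;=\;
\frac{P(E \mid F)\, P(F)}{P(E \mid F)\, P(F) + P(E \mid \bar{F})\, P(\bar{F})}
```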
Procedia PDF Downloads 138
238 Artificial Intelligence for Generative Modelling
Authors: Shryas Bhurat, Aryan Vashistha, Sampreet Dinakar Nayak, Ayush Gupta
Abstract:
As technology advances toward high computational resources, there is a paradigm shift in the usage of these resources to optimize the design process. This paper discusses the usage of 'Generative Design using Artificial Intelligence' to build better models that apply operations like selection, mutation, and crossover to generate results. A human designer tends to think of the simplest approach while designing an object, but the intelligence learns from the past and designs complex, optimized CAD models. Generative design takes the boundary conditions and comes up with multiple solutions, iterating toward a sturdy design with the most optimal parameters, saving huge amounts of time and resources. The new production techniques at our disposal allow us to use additive manufacturing, 3D printing, and other innovative manufacturing techniques to save resources and design artistically engineered CAD models. This paper also discusses the genetic algorithm and the non-domination technique for choosing the right results, using biomimicry, which draws on designs that have evolved for millions of years. The computer uses parametric models to generate newer models through an iterative approach and uses cloud computing to store these iterative designs. The latter part of the paper compares topology optimization, previously used to generate CAD models, with generative design. Finally, this paper shows the performance of the algorithms and how they help in designing resource-efficient models.
Keywords: genetic algorithm, bio mimicry, generative modeling, non-dominant techniques
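To ground the selection-crossover-mutation loop mentioned above, here is a bare-bones genetic algorithm on a toy objective (the objective and all parameters are illustrative placeholders, not the paper's setup):

```python
# Bare-bones genetic algorithm loop: selection, crossover, mutation.
# The objective and parameters are illustrative placeholders.
import random

def fitness(x):            # toy objective: maximize -(x - 3)^2, optimum at x = 3
    return -(x - 3.0) ** 2

pop = [random.uniform(-10, 10) for _ in range(50)]
for generation in range(100):
    # Selection: keep the fitter half (truncation selection).
    pop.sort(key=fitness, reverse=True)
    parents = pop[:25]
    # Crossover: average two random parents; mutation: small Gaussian noise.
    children = []
    while len(children) < 25:
        a, b = random.sample(parents, 2)
        children.append((a + b) / 2 + random.gauss(0, 0.1))
    pop = parents + children

print(f"best ~ {max(pop, key=fitness):.3f}")  # should approach 3.0
```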
Procedia PDF Downloads 152
237 A Highly Efficient Broadcast Algorithm for Computer Networks
Authors: Ganesh Nandakumaran, Mehmet Karaata
Abstract:
A wave is a distributed execution, often made up of a broadcast phase followed by a feedback phase, requiring the participation of all the system processes before a particular event called a decision is taken. Wave algorithms with one initiator, such as the 1-wave algorithm, have been shown to be very efficient for broadcasting messages in tree networks. Extensions of this algorithm broadcasting a sequence of waves using a single initiator have been implemented in algorithms such as the m-wave algorithm. However, as the network size increases, having a single initiator adversely affects the message delivery times to nodes further away from the initiator. As a remedy, broadcast waves can be initiated by multiple initiator nodes distributed across the network to reduce the completion time of broadcasts. These waves initiated by one or more initiator processes form a collection of waves covering the entire network. Solutions to global snapshots, distributed broadcast, and various synchronization problems can be obtained efficiently using waves with multiple concurrent initiators. In this paper, we propose the first stabilizing multi-wave sequence algorithm implementing waves started by multiple initiator processes such that every process in the network receives at least one sequence of broadcasts. Being stabilizing, the proposed algorithm can withstand transient faults and does not require initialization. We view a fault as transient if it perturbs the configuration of the system but not its program.
Keywords: distributed computing, multi-node broadcast, propagation of information with feedback and cleaning (PFC), stabilization, wave algorithms
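To make the broadcast-then-feedback structure concrete, here is a toy single-initiator wave on a tree (synchronous and fault-free; the stabilizing, multi-initiator behavior that is the paper's actual contribution is not modeled):

```python
# Toy single-initiator wave on a tree: broadcast phase down, feedback phase up.
# Synchronous and fault-free; the stabilizing multi-initiator algorithm the
# paper proposes is not modeled here.
tree = {"root": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}

def wave(node):
    print(f"{node}: received broadcast")
    for child in tree[node]:          # broadcast phase (pre-order)
        wave(child)
    print(f"{node}: feedback sent")   # feedback phase (post-order)

wave("root")                          # the initiator decides last
print("root: all feedback collected, decision event")
```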
Procedia PDF Downloads 505
236 Approaches to Ethical Hacking: A Conceptual Framework for Research
Authors: Lauren Provost
Abstract:
The digital world remains increasingly vulnerable, making the development of effective cybersecurity approaches even more critical to supporting the success of the digital economy and national security. Although approaches to cybersecurity have shifted and improved in the last decade with new models, especially with cloud computing and mobility, a record number of high-severity vulnerabilities were recorded by the National Institute of Standards and Technology (NIST) and its National Vulnerability Database (NVD) in 2020. This is due, in part, to the increasing complexity of cyber ecosystems. Security must be approached with a more comprehensive, multi-tool strategy that addresses the complexity of cyber ecosystems, including the human factor. Ethical hacking has emerged as such an approach: a more effective, multi-strategy, comprehensive approach to cybersecurity's most pressing needs, especially understanding the human factor. Research on ethical hacking, however, is limited in scope. The two main objectives of this work are to (1) provide highlights of case studies in ethical hacking and (2) provide a conceptual framework for research in ethical hacking that embraces and addresses both technical and nontechnical security measures. Recommendations include an improved conceptual framework for research centered on ethical hacking that addresses the many factors and attributes of significant attacks threatening computer security, and a more robust, integrative, multi-layered framework embracing the complexity of cybersecurity ecosystems.
Keywords: ethical hacking, literature review, penetration testing, social engineering
Procedia PDF Downloads 221
235 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms
Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov
Abstract:
The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters. For example, most attempts to implement the classical conjugate gradient method have, at best, completed in the same amount of time as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration instead of two for standard CG. The standard and pipelined CG methods need the vector entries generated by the current GPU and other GPUs for matrix-vector products, so communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize communication between the parallel parts of the algorithms. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, the possibility of asynchronous calculations and communications, and load balancing between the CPU and GPU for solving large linear systems allows for scalability. The algorithm is implemented with the combined use of the MPI, OpenMP, and CUDA technologies. We show that almost optimal speedup on 8 CPUs/2 GPUs may be reached (relative to a one-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm
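To make the single-synchronization claim concrete, here is a serial NumPy sketch of the unpreconditioned pipelined CG recurrences (following the Ghysels-Vanroose formulation; the paper's MPI/OpenMP/CUDA overlap machinery is not reproduced). The two dot products sit back-to-back, so a distributed implementation can fuse them into one global reduction and overlap it with the matrix-vector product.

```python
# Unpreconditioned pipelined CG (after Ghysels and Vanroose). Both dot
# products are computed back-to-back, so on a cluster they can be fused
# into one global reduction and overlapped with the product q = A @ w.
import numpy as np

def pipelined_cg(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    w = A @ r
    z = s = p = np.zeros_like(b)
    gamma_old = alpha = 1.0
    for i in range(max_iter):
        gamma = r @ r              # reduction 1: the single sync point...
        delta = w @ r              # reduction 2: ...fused with reduction 1
        if np.sqrt(gamma) < tol:
            break
        q = A @ w                  # SpMV, overlappable with the reductions
        if i == 0:
            beta, alpha = 0.0, gamma / delta
        else:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha)
        z = q + beta * z           # maintains z = A s by recurrence
        s = w + beta * s           # maintains s = A p
        p = r + beta * p
        x = x + alpha * p
        r = r - alpha * s
        w = w - alpha * z          # maintains w = A r without an extra SpMV
        gamma_old = gamma
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pipelined_cg(A, b))          # ~ [0.0909, 0.6364]
```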
Procedia PDF Downloads 166
234 Study of Superconducting Patch Printed on Electric-Magnetic Substrates Materials
Authors: Fortaki Tarek, S. Bedra
Abstract:
In this paper, the effects of both uniaxial anisotropy in the substrate and a high-Tc superconducting patch on the resonant frequency, half-power bandwidth, and radiation patterns are investigated using an electric field integral equation and the spectral domain Green's function. The analysis is based on a full electromagnetic wave model in which London's equations and the Gorter-Casimir two-fluid model are used to investigate the resonant and radiation characteristics of a high-Tc superconducting rectangular microstrip patch in the case where the patch is printed on electric-magnetic uniaxially anisotropic substrate materials. The stationary phase technique has been used for computing the radiated electric field. The obtained results demonstrate a considerable improvement in the half-power bandwidth of the rectangular microstrip patch when a superconducting patch is used instead of a perfectly conducting one. Further results show that a high-Tc superconducting rectangular microstrip patch on a uniaxial substrate with properly selected electric and magnetic anisotropy ratios is more advantageous than one on an isotropic substrate, exhibiting wider bandwidth and better radiation characteristics. This behavior agrees with that discovered experimentally for superconducting patches on isotropic substrates. The calculated results have been compared with measured ones available in the literature, and excellent agreement has been found.
Keywords: high Tc superconducting microstrip patch, electric-magnetic anisotropic substrate, Galerkin method, surface complex impedance with boundary conditions, radiation patterns
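For reference, the standard Gorter-Casimir temperature dependences that enter such two-fluid analyses (textbook forms, not taken from this paper): the London penetration depth and the superconducting electron fraction,

```latex
% Gorter-Casimir two-fluid temperature dependences (standard textbook forms):
\lambda_L(T) = \frac{\lambda_0}{\sqrt{1 - (T/T_c)^4}},
\qquad
\frac{n_s(T)}{n} = 1 - \left(\frac{T}{T_c}\right)^4
```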
Procedia PDF Downloads 445
233 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network
Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson
Abstract:
The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods of analysis of the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come at a cost. Little research has been undertaken on how to specify optimally what data to capture, transmit, process, and store at various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the required attributes and classification to take manufacturing digital data from various sources and determine the most suitable location for data processing on the edge-cloud network. The proposed classification framework will minimise overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decision making on which dataset should be processed at the 'edge' and what to send to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0
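A fast-and-frugal heuristic is an ordered sequence of single-attribute checks with early exits; the sketch below shows how such an edge-versus-cloud rule might look, with cues and thresholds invented for illustration:

```python
# Fast-and-frugal decision sketch: ordered single-attribute checks with early
# exits. Attributes and thresholds are illustrative, not the paper's framework.
def place_dataset(latency_critical: bool, size_mb: float,
                  needs_history: bool) -> str:
    if latency_critical:        # first cue: real-time control loops stay local
        return "edge"
    if size_mb > 50.0:          # second cue: bulky raw signals are costly to ship
        return "edge"
    if needs_history:           # third cue: long-horizon analytics need the cloud
        return "cloud"
    return "cloud"              # default: archive remotely

print(place_dataset(latency_critical=False, size_mb=120.0, needs_history=True))
# -> "edge": the bandwidth cue fires before the history cue is consulted
```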
Procedia PDF Downloads 183
232 Constructions of Linear and Robust Codes Based on Wavelet Decompositions
Authors: Alla Levina, Sergey Taranov
Abstract:
The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by the use of perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are denoising signals, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. The paper proposes two constructions of robust code. The first class of robust code is based on the multiplicative inverse in a finite field. In the second robust code construction, the redundancy part is the cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.
Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability
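To illustrate the second construction (redundancy equal to the cube of the information part), here is a toy example over GF(2^8) with the AES field polynomial; the field, code length, and parameters are assumptions for illustration, not the paper's exact construction:

```python
# Toy illustration of a nonlinear redundancy: r = x^3 in GF(2^8).
# The field polynomial x^8+x^4+x^3+x+1 (0x11B, the AES choice) is an
# assumption; the paper's construction may use another field and length.
def gf_mul(a, b, poly=0x11B):
    # Carry-less "Russian peasant" multiplication modulo the field polynomial.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def encode(x):
    # Codeword = (information part, cubic redundancy part).
    return x, gf_mul(gf_mul(x, x), x)

x, r = encode(0x57)
print(f"info=0x{x:02X} redundancy=0x{r:02X}")
# An additive error e changes (x XOR e)^3 in a way that depends on x, so a
# fixed error pattern cannot stay undetected for all codewords at once.
```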
Procedia PDF Downloads 492
231 The Time-Frequency Domain Reflection Method for Aircraft Cable Defects Localization
Authors: Reza Rezaeipour Honarmandzad
Abstract:
This paper introduces an aircraft cable fault detection and location method based on time-frequency domain reflectometry (TFDR), with the goal of recognizing intermittent faults effectively and coping with serial and after-connector faults that are hard to distinguish using time domain reflectometry. In this method, the correlation function of the reflected and reference signals is used to detect and locate the aircraft cable fault according to the characteristics of the two signals in the time-frequency domain, so the hit rate of detecting and locating intermittent faults can be improved considerably. In practice, the reflected signal is corrupted by noise and false alarms happen frequently, so a threshold de-noising technique based on wavelet decomposition is used to reduce the noise interference and lower the false alarm rate. The time-frequency cross-correlation function of the reference signal and the reflected signal, based on the Wigner-Ville distribution, is then computed in order to locate the fault position. Finally, LabVIEW is used to implement the operation and control interface, whose primary function is to connect to and control MATLAB and LabSQL. Thanks to the strong computing capability and extensive function library of MATLAB, the signal processing is easily realized; in addition, LabVIEW helps make the system more reliable and easier to upgrade.
Keywords: aircraft cable, fault location, TFDR, LabVIEW
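The location step ultimately reduces to finding the lag that maximizes the correlation between the reference and reflected signals. A plain time-domain simplification follows (the paper correlates in the time-frequency plane via the Wigner-Ville distribution, which this sketch omits; all signal parameters are invented):

```python
# Simplified fault location by cross-correlation lag (time-domain only;
# the paper's method correlates in the time-frequency plane instead).
import numpy as np

fs = 1e9                      # sample rate [Hz], illustrative
v = 2e8                       # propagation velocity in the cable [m/s], assumed
t = np.arange(0, 2e-6, 1 / fs)
ref = np.exp(-((t - 0.1e-6) ** 2) / (2 * (20e-9) ** 2))    # reference pulse
delay_s = 0.4e-6                                            # true round-trip delay
refl = 0.5 * np.exp(-((t - 0.1e-6 - delay_s) ** 2) / (2 * (20e-9) ** 2))

corr = np.correlate(refl, ref, mode="full")
lag = (np.argmax(corr) - (len(ref) - 1)) / fs   # lag of the correlation peak [s]
distance = v * lag / 2                           # round trip -> one-way distance
print(f"estimated fault at {distance:.1f} m")    # expect ~40 m
```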
Procedia PDF Downloads 479
230 Performance Analysis of High Temperature Heat Pump Cycle for Industrial Process
Authors: Seon Tae Kim, Robert Hegner, Goksel Ozuylasi, Panagiotis Stathopoulos, Eberhard Nicke
Abstract:
High-temperature heat pumps (HTHP) that can supply heat at temperatures above 200°C can enhance the energy efficiency of industrial processes and reduce the CO₂ emissions connected with the heat supply of these processes. In the current work, the thermodynamic performance of three different vapor compression cycles, which use R-718 (water) as the working medium, has been evaluated using a commercial process simulation tool (EBSILON Professional). All considered cycles use two-stage vapor compression with intercooling between stages. The main aim of the study is to compare different intercooling strategies and study possible heat recovery scenarios within the intercooling process. This comparison has been carried out by computing the coefficient of performance (COP), the heat supply temperature level, and the respective mass flow rate of water for all cycle architectures. With increasing temperature difference between the heat source and heat sink, ∆T, the COP values decrease as expected, and the highest COP value was found for the cycle configurations where both compressors have the same pressure ratio (PR). An investigation of the HTHP capacities with optimized PR, together with an exergy analysis, has also been carried out. The internal heat exchanger cycle with an inward direction of secondary flow (IHX-in) showed a higher temperature level and exergy efficiency than the other cycles. Moreover, the available operating range was estimated by considering mechanical limitations.
Keywords: high temperature heat pump, industrial process, vapor compression cycle, R-718 (water), thermodynamic analysis
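For orientation, the reported figure of merit is the heating coefficient of performance, bounded above by the Carnot limit (standard definitions, temperatures in kelvin):

```latex
% Heating COP and its Carnot bound (standard definitions):
\mathrm{COP} = \frac{\dot{Q}_{\mathrm{sink}}}{\dot{W}_{\mathrm{comp}}}
\;\le\;
\mathrm{COP}_{\mathrm{Carnot}} = \frac{T_{\mathrm{sink}}}{T_{\mathrm{sink}} - T_{\mathrm{source}}}
```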
Procedia PDF Downloads 150
229 The Perception on 21st Century Skills of Nursing Instructors and Nursing Students at Boromarajonani College of Nursing, Chonburi
Authors: Kamolrat Turner, Somporn Rakkwamsuk, Ladda Leungratanamart
Abstract:
The aim of this descriptive study was to determine the perception of 21st century skills among nursing instructors and nursing students at Boromarajonani College of Nursing, Chonburi. A total of 38 nursing instructors and 75 second-year nursing students took part in the study. Data were collected with a 21st century skills questionnaire comprising 63 items. Descriptive statistics were used to describe the findings. The results showed that the overall mean scores of the nursing instructors' perception of 21st century skills were at a high level. The highest mean scores were recorded for computing and ICT literacy, and career and learning skills. The lowest mean scores were recorded for reading and writing, and mathematics. The overall mean scores of the nursing students' perception of 21st century skills were also at a high level. The highest mean scores were recorded for computer and ICT literacy, in which the highest item mean score was for competency in computer programs. The lowest mean scores were recorded for the reading, writing, and mathematics components, in which the highest item mean score was for reading Thai correctly, and the lowest item mean score was for reading English and translating it for others correctly. The findings from this study show that the perceptions of the nursing instructors were consistent with those of the nursing students. Moreover, any activities aiming to raise capacity in reading English and translating information for others should be taken into consideration.
Keywords: 21st century skills, perception, nursing instructor, nursing student
Procedia PDF Downloads 319
228 Characteristic Sentence Stems in Academic English Texts: Definition, Identification, and Extraction
Authors: Jingjie Li, Wenjie Hu
Abstract:
Phraseological units in academic English texts have been a central focus of recent corpus linguistic research. A wide variety of phraseological units have been explored, including collocations, chunks, lexical bundles, patterns, semantic sequences, etc. This paper describes a special category of clause-level phraseological units, namely Characteristic Sentence Stems (CSSs), with a view to describing their defining criteria and extraction method. CSSs are contiguous lexico-grammatical sequences which contain a subject-predicate structure and which are frame expressions characteristic of academic writing. The extraction of CSSs consists of six steps: part-of-speech tagging, n-gram segmentation, structure identification, significance-of-occurrence calculation, text range calculation, and overlapping sequence reduction. The significance-of-occurrence calculation is the crux of this study. It involves computing both the internal association and the boundary independence of a CSS, testing the significance of the CSS's occurrence from both inside and outside perspectives. A new normalization algorithm is also introduced into the calculation of LocalMaxs to reduce overlapping sequences. It is argued that many sentence stems are so recurrent in academic texts that the most typical of them have become habitual ways of making meaning in academic writing. Therefore, studies of CSSs could have implications and reference value for academic discourse analysis, English for Academic Purposes (EAP) teaching, and writing.
Keywords: characteristic sentence stem, extraction method, phraseological unit, the statistical measure
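As a toy rendering of the significance-of-occurrence idea, the sketch below scores an n-gram with a simple frequency-ratio "glue" and keeps it only if it is a local maximum against its neighbouring extensions; the glue function stands in for the paper's exact statistic:

```python
# Toy "local maximum" significance test for n-grams. The glue function is a
# simple frequency ratio standing in for the paper's exact statistic.
from collections import Counter

tokens = "we show that the results show that the model".split()

def ngrams(n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

freq = {n: ngrams(n) for n in (1, 2, 3)}

def glue(gram):
    # Association of the n-gram relative to its two (n-1)-gram parts.
    n = len(gram)
    left, right = freq[n - 1][gram[:-1]], freq[n - 1][gram[1:]]
    return 2 * freq[n][gram] / (left + right)

bigram = ("show", "that")
extensions = [g for g in freq[3] if g[:2] == bigram or g[1:] == bigram]
if all(glue(bigram) >= glue(g) for g in extensions):
    print(f"{bigram} is a local-maximum candidate")
```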
Procedia PDF Downloads 170
227 Cloud Enterprise Application Provider Selection Model for the Small and Medium Enterprise: A Pilot Study
Authors: Rowland R. Ogunrinde, Yusmadi Y. Jusoh, Noraini Che Pa, Wan Nurhayati W. Rahman, Azizol B. Abdullah
Abstract:
Enterprise Applications (EAs) help organizations achieve operational excellence and competitive advantage. Over time, most Small and Medium Enterprises (SMEs), which are known to be the major drivers of most thriving global economies, have used the costly on-premise versions of these applications, making it difficult to thrive competitively in the same market environment as their large enterprise counterparts. The advent of cloud computing presents SMEs with an affordable offer and great opportunities, as such EAs can be cloud-hosted and rented on a pay-per-use basis that does not require huge initial capital. However, as there are numerous Cloud Service Providers (CSPs) offering EAs as Software-as-a-Service (SaaS), there is the challenge of choosing a suitable provider with Quality of Service (QoS) that meets the organization's customized requirements. The proposed model takes care of that and goes a step further to select the most affordable among a selected few of the CSPs. In the earlier stage, before developing the instrument and conducting the pilot test, the researchers conducted a structured interview with three experts to validate the proposed model. In conclusion, the validity and reliability of the instrument were tested with experts and typical respondents and analyzed with SPSS 22. The results confirmed the validity of the proposed model and the validity and reliability of the instrument.
Keywords: cloud service provider, enterprise application, quality of service, selection criteria, small and medium enterprise
Procedia PDF Downloads 180
226 Theoretical Analysis of the Optical and Solid State Properties of Thin Film
Authors: E. I. Ugwu
Abstract:
This work presents a theoretical analysis of the optical and solid state properties of a ZnS thin film using a beam propagation technique, in which a scalar wave is propagated through the thin film deposited on a substrate under the assumption that the dielectric medium is sectioned into a homogeneous reference dielectric constant term and a perturbed dielectric term representing the deposited thin film medium. Together, these two terms constitute an arbitrary complex dielectric function that describes the dielectric perturbation imposed by the medium on the system. This function is substituted into a defined scalar wave equation, on which the appropriate Green's function is defined and solved using a series technique. The Green's function values were used in the Dyson and Lippmann-Schwinger equations, in conjunction with the Born approximation method, to compute the propagated field for different input regions of field wavelength, during which the influence of the dielectric constants and mesh size of the thin film on the propagating field was depicted. The computed field was used in turn to generate the data used to compute the band gap and the solid state and optical properties of the thin film, such as reflectance and transmittance, and the band gap obtained was found to be a close approximation to the experimental value.
Keywords: scalar wave, optical and solid state properties, thin film, dielectric medium, perturbation, Lippmann Schwinger equations, Green's Function, propagation
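The field equation underlying this computation is the standard Lippmann-Schwinger form, in which the propagated field equals the unperturbed field plus the contribution scattered by the dielectric perturbation (schematic notation; the first Born approximation replaces ψ under the integral with ψ₀):

```latex
% Lippmann-Schwinger equation for the field in the perturbed medium
% (schematic form; V denotes the dielectric perturbation):
\psi(\mathbf{r}) = \psi_0(\mathbf{r})
  + \int G_0(\mathbf{r}, \mathbf{r}')\, V(\mathbf{r}')\, \psi(\mathbf{r}')\, d\mathbf{r}'
```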
Procedia PDF Downloads 439
225 Decision Making System for Clinical Datasets
Authors: P. Bharathiraja
Abstract:
Computer-aided decision making systems are used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning into decision making systems. A fuzzy rule-based inference technique can be used for classification in order to incorporate human reasoning into the decision making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease dataset from the University of California at Irvine (UCI) Machine Learning Repository. The data were split using a hold-out approach into training and testing data. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule-based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
Keywords: decision making, data mining, normalization, fuzzy rule, classification
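Two of the preprocessing steps named above are easy to pin down in code; a minimal sketch, with the column values and the number of partitions chosen for illustration:

```python
# Sketch of two preprocessing steps from the abstract: min-max normalization
# to [0, 1], then equal-width partitioning into fuzzy-interval bins.
# The column values and the number of bins are illustrative.
def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def equal_width_bins(values, k=3):
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    # Bin index in [0, k-1]; the top edge folds into the last bin.
    return [min(int((v - lo) / width), k - 1) for v in values]

cholesterol = [233.0, 250.0, 204.0, 286.0, 199.0]
print(min_max(cholesterol))           # values normalized to [0, 1]
print(equal_width_bins(cholesterol))  # e.g. low / medium / high partitions
```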
Procedia PDF Downloads 519
224 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
In multi-echelon supply chain networks, optimal supplier selection significantly depends on the accuracy of suppliers' performance prediction. Different methods of multi-criteria decision making, such as ANN, GA, fuzzy methods, and AHP, have previously been used to predict supplier performance, but the "black-box" characteristic of these methods is still a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A test-train approach is then utilized for the ANN and GEP exclusively. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the root mean square error (RMSE) and the coefficient of determination (R²). The results of a case study conducted at Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with ANN, gene expression programming has a significant advantage in predicting supplier performance, as reflected in the respective RMSE and R² values. Moreover, using GEP, a mathematical function was also derived, resolving the issue of the ANN black-box structure in modeling performance prediction.
Keywords: Supplier Performance Prediction, ANN, GEP, Automotive, SAPCO
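The two comparison metrics are standard; for reference, a minimal implementation (R² computed as the coefficient of determination, with illustrative score vectors):

```python
# Standard definitions of the two comparison metrics used in the study.
import numpy as np

def rmse(y, y_hat):
    return np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2))

def r_squared(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    ss_res = np.sum((y - y_hat) ** 2)       # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

y_true = [0.82, 0.74, 0.91, 0.66]   # illustrative supplier performance scores
y_pred = [0.80, 0.70, 0.93, 0.68]
print(rmse(y_true, y_pred), r_squared(y_true, y_pred))
```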
Procedia PDF Downloads 421
223 Decision Support: How Explainable A.I. Can Improve Transparency and Trust with Human Users
Authors: Devon Brown, Liu Chunmei
Abstract:
This paper presents an analysis, as part of the researcher's dissertation topic, focusing on the intersection of affective and analytical directed acyclic graphs (DAGs) in the context of Decision Support Systems (DSS). The work involves analyzing decision theory models, such as affective and Bayesian decision theory models, and how they could be implemented under an affective computing framework using information fusion and human-centered design. Additionally, the researcher is beginning work on an Affective-Analytic Decision Framework (AADF) model for the dissertation research and is looking to merge logic and analytic models with empathetic insights into affective DAGs. Data collection begins in Fall 2024; in preparation, this paper analyzes previous research in the area, introduces the AADF, and proposes conceptual models for consideration. Here, the research emphasis is placed on analyzing Bayesian networks and Markov models, which offer probabilistic techniques for decision-making under uncertainty. Ideally, including affect in analytic models will help algorithms increase user trust by including emotional states and the user's experience, with the goal of developing emotionally intelligent A.I. systems that can begin to navigate the complex fabric of human emotion during decision-making.
Keywords: decision support systems, explainable AI, HCAI techniques, affective-analytical decision framework
Procedia PDF Downloads 27
222 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality
Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye
Abstract:
When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings, and it is obtained through matrix perturbation theory by utilizing the unitary invariance of word embeddings. Unlike in natural language, in genomics, especially in genome sequence processing, there is no notion of a "word"; rather, there are sequence substrings of length k called k-mers. K-mer sizes matter, and they vary depending on the goal of the task at hand. The dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss. In this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between the k-mer size and the embedding dimension. This is done by studying the scaling capability of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Utilising the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in computing accurate embeddings as both the k-mer size and vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
Keywords: word embeddings, k-mer embedding, dimensionality reduction
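A small sketch of the k-mer tokenization that feeds such embedding models, using gensim's Word2Vec as the trainer (the sequences and hyperparameters are illustrative placeholders):

```python
# Sketch: turn genomic sequences into overlapping k-mer "sentences", then
# train Word2Vec embeddings. Hyperparameters are illustrative placeholders.
from gensim.models import Word2Vec

def kmerize(seq: str, k: int) -> list[str]:
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

sequences = ["ATGCGTACGTTAGC", "GGCATGCGTACGAA"]    # toy reads
corpus = [kmerize(s, k=4) for s in sequences]

model = Word2Vec(sentences=corpus, vector_size=16, window=5,
                 min_count=1, sg=1, epochs=50)       # skip-gram variant
print(model.wv["ATGC"][:4])                          # embedding for one 4-mer
print(model.wv.most_similar("ATGC", topn=2))
```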
Procedia PDF Downloads 141
221 Enhanced Model for Risk-Based Assessment of Employee Security with Bring Your Own Device Using Cyber Hygiene
Authors: Saidu I. R., Shittu S. S.
Abstract:
As the trend of personal devices accessing corporate data continues to rise through Bring Your Own Device (BYOD) practices, organizations recognize the potential cost reduction and productivity gains. However, the associated security risks pose a significant threat to these benefits. Often, organizations adopt BYOD environments without fully considering the vulnerabilities introduced by human factors in this context. This study presents an enhanced assessment model that evaluates the security posture of employees in BYOD environments using cyber hygiene principles. The framework assesses users' adherence to best practices and guidelines for maintaining a secure computing environment, employing scales and the Euclidean distance formula. By utilizing this algorithm, the study measures the distance between users' security practices and the organization's optimal security policies. To facilitate user evaluation, a simple and intuitive interface for automated assessment is developed. To validate the effectiveness of the proposed framework, design science research methods are employed, and empirical assessments are conducted using five artifacts to analyze user suitability in BYOD environments. By addressing the human factor vulnerabilities through the assessment of cyber hygiene practices, this study aims to enhance the overall security of BYOD environments and enable organizations to leverage the advantages of this evolving trend while mitigating potential risks.
Keywords: security, BYOD, vulnerability, risk, cyber hygiene
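The scoring core is a Euclidean distance between a user's practice vector and the organization's optimal policy vector; a minimal sketch, with the five scale items and ratings invented for illustration:

```python
# Minimal sketch of the cyber-hygiene gap score: Euclidean distance between
# a user's practice ratings and the organization's optimal policy ratings.
# The five scale items and their values are invented for illustration.
import math

optimal = {"updates": 5, "passwords": 5, "screen_lock": 5,
           "public_wifi": 5, "app_sources": 5}
user = {"updates": 4, "passwords": 3, "screen_lock": 5,
        "public_wifi": 2, "app_sources": 4}

gap = math.sqrt(sum((optimal[k] - user[k]) ** 2 for k in optimal))
print(f"hygiene gap = {gap:.2f}")   # larger distance => riskier BYOD user
```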
Procedia PDF Downloads 77
220 Review of Theories and Applications of Genetic Programing in Sediment Yield Modeling
Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo
Abstract:
Sediment yield can be considered to be the total sediment load that leaves a drainage basin. Knowledge of the quantity of sediment present in a river at a particular time can lead to better flood capacity in reservoirs and consequently help to control over-bank flooding. Furthermore, as sediment accumulates in a reservoir, the reservoir gradually loses its ability to store water for the purposes for which it was built. The development of hydrological models to forecast the quantity of sediment present in a reservoir helps planners and managers of water resources systems to understand the system better in terms of its problems and alternative ways to address them. The application of artificial intelligence models and techniques to such real-life situations has proven to be an effective approach to solving complex problems. This paper makes an extensive review of the literature relevant to the theories and applications of evolutionary algorithms, and most especially genetic programming. Successful applications of genetic programming as a soft computing technique in sediment modelling and other branches of knowledge are reviewed. Some fundamental issues, such as benchmarking, generalization ability, bloat and over-fitting, and other open issues relating to the working principles of GP, which need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers, and the general GP community sufficient research direction and a valuable guide, and to keep all stakeholders abreast of the issues that need attention during the next decade for the advancement of GP.
Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield
Procedia PDF Downloads 448