Search results for: computational domain
3264 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh
Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila
Abstract:
Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data they use is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing the reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, are tools capable of increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which can help clarify roles and responsibilities and improve data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance by enabling better insights into the business through better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization. This can lead to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by experience feedback from AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
Keywords: data culture, data-driven organization, data mesh, data quality for business success
Procedia PDF Downloads 135
3263 Clustering of Association Rules of ISIS & Al-Qaeda Based on Similarity Measures
Authors: Tamanna Goyal, Divya Bansal, Sanjeev Sofat
Abstract:
In the context of world-threatening terrorist attacks, where early detection, distinction, and prediction are effective diagnostic techniques, many data mining and statistical approaches are available to assure the functionally accurate and precise analysis of terrorism data. The computational extraction of derived patterns is a non-trivial task that comprises domain-specific discovery by means of sophisticated algorithm design and analysis. This paper proposes an approach for similarity extraction that obtains the useful attributes from the available datasets of terrorist attacks, applies a feature selection technique based on statistical impurity measures, and then applies clustering techniques on the basis of similarity measures. On the basis of the degree of participation of attributes in the rules, the associative dependencies between the attacks are analyzed. Consequently, to compute the similarity among the discovered rules, we applied a weighted similarity measure. Finally, the rules are grouped by applying hierarchical clustering. We have applied the technique to an open source dataset to determine its usability and efficiency, and a literature search is also accomplished to support the efficiency and accuracy of our results.
Keywords: association rules, clustering, similarity measure, statistical approaches
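To make the weighted-similarity and hierarchical-clustering step concrete, here is a minimal sketch, assuming rules are reduced to attribute sets and weights reflect each attribute's degree of participation. The rule contents and weights below are illustrative assumptions, not the paper's dataset:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical association rules, each reduced to the set of attributes it uses.
rules = [
    {"weapon:explosives", "target:civilian", "region:iraq"},
    {"weapon:explosives", "target:military", "region:iraq"},
    {"weapon:firearms", "target:police", "region:afghanistan"},
    {"weapon:firearms", "target:civilian", "region:afghanistan"},
]

# Assumed per-attribute weights reflecting each attribute's degree of
# participation in the discovered rules.
weights = {"weapon:explosives": 1.0, "weapon:firearms": 1.0,
           "target:civilian": 0.8, "target:military": 0.8,
           "target:police": 0.8, "region:iraq": 0.5, "region:afghanistan": 0.5}

def weighted_similarity(r1, r2):
    """Weighted Jaccard: shared attribute weight over total attribute weight."""
    shared = sum(weights[a] for a in r1 & r2)
    total = sum(weights[a] for a in r1 | r2)
    return shared / total if total else 0.0

# Condensed pairwise distance vector (1 - similarity) for scipy's linkage.
n = len(rules)
dist = [1.0 - weighted_similarity(rules[i], rules[j])
        for i in range(n) for j in range(i + 1, n)]

tree = linkage(np.asarray(dist), method="average")
labels = fcluster(tree, t=0.6, criterion="distance")
print("cluster label per rule:", labels)
```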
Procedia PDF Downloads 320
3262 Development of Residual Power Series Methods for Efficient Solutions of Stiff Differential Equations
Authors: Gebreegziabher Hailu
Abstract:
This paper presents the development of residual power series methods (RPSM) aimed at efficiently solving stiff differential equations, which pose significant challenges in numerical analysis due to rapid changes in solution behavior. The RPSM is a numerical approach that generates polynomial-based approximate solutions without the need for linearization, discretization, or perturbation techniques, making it straightforward to implement and less prone to computational errors. We introduce an approach that combines power series expansions with residual minimization techniques to enhance convergence and stability. After analyzing the theoretical foundations of stiffness, we detail the formulation of the residual power series method and show how it effectively captures the dynamics of stiff systems while maintaining computational efficiency. Numerical experiments demonstrate the method's superiority in terms of accuracy and computational cost when compared to traditional methods like implicit Runge-Kutta or multistep techniques. We also explore adaptive strategies within our framework to automatically adjust parameters based on the stiffness characteristics of the problem at hand. Ultimately, our findings contribute to the broader toolkit for tackling stiff differential equations, offering a robust alternative that promises to streamline computational workflows in various applied mathematics and engineering contexts.
Keywords: residual power series methods, stiff differential equations, numerical approach, Runge-Kutta methods
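A minimal sketch of the residual power series idea, applied to an illustrative stiff test problem of our own choosing (the equation, initial condition, and truncation order are assumptions, not the paper's examples). The series coefficients are obtained by forcing successive derivatives of the residual to vanish at the expansion point:

```python
import sympy as sp

# Illustrative stiff test problem: y' = -50*(y - cos(t)), y(0) = 0.
t = sp.symbols("t")
N = 8                      # truncation order of the power series (assumed)
coeffs = sp.symbols(f"c0:{N+1}")
y = sum(c * t**k for k, c in enumerate(coeffs))

residual = sp.diff(y, t) + 50 * (y - sp.cos(t))

# RPSM condition: the k-th derivative of the residual vanishes at t = 0,
# which yields the coefficients one order at a time.
solution = {coeffs[0]: 0}  # initial condition y(0) = 0
for k in range(N):
    eq = sp.diff(residual, t, k).subs(t, 0).subs(solution)
    solution[coeffs[k + 1]] = sp.solve(eq, coeffs[k + 1])[0]

y_approx = y.subs(solution)
print(sp.nsimplify(y_approx))
print("y(0.1) ~", float(y_approx.subs(t, 0.1)))
```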
Procedia PDF Downloads 22
3261 Forced Vibration of a Planar Curved Beam on Pasternak Foundation
Authors: Akif Kutlu, Merve Ermis, Nihal Eratlı, Mehmet H. Omurtag
Abstract:
The objective of this study is to investigate the forced vibration of a planar curved beam lying on an elastic foundation by using the mixed finite element method. The finite element formulation is based on the Timoshenko beam theory. In order to solve the problems in the frequency domain, the element matrices of two-noded curvilinear elements are transformed into Laplace space. The results are transformed back to the time domain by the well-known modified Durbin numerical inverse transformation algorithm. First, the presented finite element formulation is verified through the forced vibration analysis of a planar curved Timoshenko beam resting on a Winkler foundation, and the finite element results are compared with the results available in the literature. Then, the forced vibration analysis of a planar curved beam resting on a Winkler-Pasternak foundation is conducted.
Keywords: curved beam, dynamic analysis, elastic foundation, finite element method
Procedia PDF Downloads 344
3260 On the Study of the Electromagnetic Scattering by Large Obstacle Based on the Method of Auxiliary Sources
Authors: Hidouri Sami, Aguili Taoufik
Abstract:
We consider fast and accurate solutions of scattering problems by large perfectly electric conducting (PEC) objects, formulated through an optimization of the Method of Auxiliary Sources (MAS). We present various techniques used to reduce the total computational cost of the scattering problem. The first technique is based on replacing the object by an array of a finite number of small PEC objects with the same shape. The second solution reduces the problem by considering only half of the object. These two solutions are compared to results from the reference bibliography.
Keywords: method of auxiliary sources, scattering, large object, RCS, computational resources
Procedia PDF Downloads 241
3259 Transient Voltage Distribution on the Single Phase Transmission Line under Short Circuit Fault Effect
Authors: A. Kojah, A. Nacaroğlu
Abstract:
Single-phase transmission lines are used to transfer data or energy between two users. Transient conditions such as switching operations and short circuit faults cause fluctuations in the waveform to be transmitted. The spatial voltage distribution on a single-phase transmission line may change owing to the position and duration of the short circuit fault in the system. In this paper, the state space representation of the single-phase transmission line is given for a short circuit fault and for various types of terminations. Since the transmission line is modeled in the time domain using distributed parametric elements, the mathematical representation of the event is given in state space (time domain) differential equation form. This also makes the problem easier to solve, given the time- and space-dependent characteristics of the voltage variations on the distributed parametrically modeled transmission line.
Keywords: energy transmission, transient effects, transmission line, transient voltage, RLC short circuit, single phase
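A rough sketch of how such a state-space model can be set up numerically: the distributed line is approximated by cascaded RLC sections, the states are the section currents and node voltages, and the short-circuit fault is modeled as a large shunt conductance switched in at one node. All parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# A minimal lumped-parameter sketch of a distributed single-phase line:
# n cascaded RLC sections; values per section are illustrative assumptions.
n, R, L, C, G = 20, 0.05, 1e-6, 1e-9, 1e-9   # per-section parameters
R_load = 50.0                                 # termination resistance
fault_node, t_on, t_off, G_fault = 10, 2e-6, 4e-6, 1e3  # short-circuit fault

def source(t):
    return 1.0  # unit step applied at the sending end

def rhs(t, x):
    i, v = x[:n], x[n:]                      # section currents, node voltages
    g = np.full(n, G)
    if t_on <= t <= t_off:                   # fault = large shunt conductance
        g[fault_node] += G_fault
    v_in = np.concatenate(([source(t)], v[:-1]))
    di = (v_in - v - R * i) / L
    i_out = np.concatenate((i[1:], [v[-1] / R_load]))
    dv = (i - i_out - g * v) / C
    return np.concatenate((di, dv))

sol = solve_ivp(rhs, (0, 8e-6), np.zeros(2 * n), method="BDF", max_step=1e-8)
print("voltage at fault node at t_end:", sol.y[n + fault_node, -1])
```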
Procedia PDF Downloads 223
3258 RASPE: Risk Advisory Smart System for Pipeline Projects in Egypt
Authors: Nael Y. Zabel, Maged E. Georgy, Moheeb E. Ibrahim
Abstract:
A knowledge-based expert system with the acronym RASPE is developed as an application tool to help decision makers in construction companies make informed decisions about managing risks in pipeline construction projects. Expert systems were chosen from all available artificial intelligence techniques because an expert system is better suited to representing a domain's knowledge and the reasoning behind domain-specific decisions. The knowledge-based expert system captures the knowledge in the form of conditional rules that represent various project scenarios and potential risk mitigation/response actions. The knowledge built into RASPE is utilized through the underlying inference engine, which allows the firing of the rules relevant to the project scenario under consideration. This paper provides an overview of the knowledge acquisition process and describes the knowledge structure, which is divided into four major modules. The paper shows one module in full detail for illustration purposes and concludes with insightful remarks.
Keywords: expert system, knowledge management, pipeline projects, risk management
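A minimal sketch of how conditional rules and an inference engine of this kind interact; the rules and scenario below are illustrative assumptions, not the actual RASPE knowledge base:

```python
# Forward-chaining sketch: fire every rule whose conditions all hold
# for the given project scenario. Rule contents are hypothetical.
rules = [
    ({"soil": "rocky", "method": "open_cut"},
     "Risk: excavation delays; consider a trenchless crossing."),
    ({"permits": "pending"},
     "Risk: schedule slippage; escalate permit follow-up."),
    ({"zone": "urban", "depth": "shallow"},
     "Risk: third-party damage; add utility surveys and signage."),
]

def infer(scenario):
    """Return the advice of every rule satisfied by the scenario."""
    advice = []
    for conditions, action in rules:
        if all(scenario.get(k) == v for k, v in conditions.items()):
            advice.append(action)
    return advice

project = {"soil": "rocky", "method": "open_cut", "permits": "pending",
           "zone": "rural", "depth": "deep"}
for line in infer(project):
    print(line)
```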
Procedia PDF Downloads 310
3257 Topology Optimization of Heat and Mass Transfer for Two Fluids under Steady State Laminar Regime: Application on Heat Exchangers
Authors: Rony Tawk, Boutros Ghannam, Maroun Nemer
Abstract:
The topology optimization technique presents a potential tool for the design and optimization of structures involved in mass and heat transfer. The method starts with an initial intermediate domain and progressively distributes the solid and the two fluids exchanging heat. The multi-objective function of the problem accounts for the minimization of total pressure loss and the maximization of heat transfer between the solid and fluid subdomains. Existing methods account for the presence of only one fluid, while the present work extends the optimization to the distribution of a solid and two different fluids. This requires separating the channels of the two fluids and ensuring a minimum solid thickness between them, which is done by adding a third objective function to the multi-objective optimization problem. This article uses a density approach where each cell holds two local design parameters ranging from 0 to 1, and the combination of their extrema defines the presence of solid, cold fluid, or hot fluid in that cell. The finite volume method is used for the direct solver, coupled with a discrete adjoint approach for sensitivity analysis and the method of moving asymptotes for numerical optimization. Several examples are presented to show the ability of the method to find a trade-off between the minimization of power dissipation and the maximization of heat transfer, while ensuring the separation and continuity of the channel of each fluid without crossing or mixing the fluids. The main conclusion is the possibility of finding an optimal bi-fluid domain using topology optimization, defining a fluid-to-fluid heat exchanger device.
Keywords: topology optimization, density approach, bi-fluid domain, laminar steady state regime, fluid-to-fluid heat exchanger
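A small sketch of the two-parameter density idea: each cell carries a pair (a, b) in [0, 1]², and the combinations of their extrema encode solid, cold fluid, or hot fluid. The particular interpolation below is an illustrative assumption:

```python
# Map each cell's two design variables (a, b) to material fractions.
# a -> 0 means solid, a -> 1 means fluid; b then splits the fluid
# fraction between cold (b -> 0) and hot (b -> 1).
def material_fractions(a, b):
    solid = 1.0 - a
    cold = a * (1.0 - b)
    hot = a * b
    return solid, cold, hot

# Corner checks: combinations of the extrema give pure phases.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), "->", material_fractions(a, b))
# Intermediate densities are penalized during optimization so that
# cells converge to one of the three pure phases.
```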
Procedia PDF Downloads 399
3256 A Transform Domain Function Controlled VSSLMS Algorithm for Sparse System Identification
Authors: Cemil Turan, Mohammad Shukri Salman
Abstract:
The convergence rate of the least-mean-square (LMS) algorithm deteriorates if the input signal to the filter is correlated. In a system identification problem, this convergence rate can be improved if the signal is white and/or if the system is sparse. We recently proposed a sparse transform domain LMS-type algorithm that uses a variable step size for sparse system identification. The proposed algorithm provided high performance even when the input signal is highly correlated. In this work, we investigate the performance of the proposed TD-LMS algorithm for a large number of filter taps, which is also a critical issue for the standard LMS algorithm. Additionally, the optimum value of the most important parameter is calculated for all experiments. Moreover, the convergence analysis of the proposed algorithm is provided. The performance of the proposed algorithm has been compared to that of different algorithms in sparse system identification settings with different sparsity levels and different numbers of filter taps. Simulations have shown that the proposed algorithm has prominent performance compared to the other algorithms.
Keywords: adaptive filtering, sparse system identification, TD-LMS algorithm, VSSLMS algorithm
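For illustration, a generic transform-domain LMS sketch with a variable step size, in the spirit of (but not identical to) the paper's function-controlled VSSLMS update; the DCT transform, power normalization, and error-driven step-size rule below are common textbook choices, not the paper's exact algorithm:

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
N, M = 5000, 16                            # samples, filter taps
h = np.zeros(M); h[[2, 9]] = [1.0, -0.5]   # sparse unknown system

# Correlated input via an AR(1) process, plus measurement noise.
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()
d = np.convolve(x, h)[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros(M)                      # adaptive weights (transform domain)
p = np.full(M, 1e-2)                 # running per-bin input power
mu, mu_min, mu_max = 0.05, 1e-4, 0.5
for n in range(M, N):
    u = dct(x[n - M + 1:n + 1][::-1], norm="ortho")   # transformed input
    y = w @ u
    e = d[n] - y
    p = 0.99 * p + 0.01 * u**2                        # power estimate
    mu = np.clip(0.97 * mu + 0.03 * e**2, mu_min, mu_max)  # variable step
    w += mu * e * u / (p + 1e-8)                      # normalized update
print("steady-state error power:", e**2)
```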
Procedia PDF Downloads 360
3255 Density Functional (DFT) Study of the Structural and Phase Transition of ThC and ThN: LDA vs. GGA Computations
Authors: Hamza Rekab Djabri, Salah Daoud
Abstract:
The present paper deals with the computation of the structural and electronic properties of the ThC and ThN compounds using density functional theory within the generalized gradient approximation (GGA) and the local density approximation (LDA). We employ the full potential linear muffin-tin orbital (FP-LMTO) method as implemented in the LmtART code. We examine the structural parameters in eight different structures: NaCl (B1), CsCl (B2), zinc blende (B3), NiAs (B8), PbO (B10), wurtzite (B4), HCP (A3), and βSn (A5). The equilibrium lattice parameter, bulk modulus, and its pressure derivative are presented for all calculated phases. The calculated ground state properties are in good agreement with available experimental and theoretical results.
Keywords: DFT, GGA, LDA, structural properties, ThC, ThN
Procedia PDF Downloads 98
3254 Experimental Approach for Determining Hemi-Anechoic Characteristics of Engineering Acoustical Test Chambers
Authors: Santiago Montoya-Ospina, Raúl E. Jiménez-Mejía, Rosa Elvira Correa Gutiérrez
Abstract:
An experimental methodology is proposed for determining the hemi-anechoic characteristics of an engineering acoustic room built at the facilities of Universidad Nacional de Colombia, in order to evaluate the free-field conditions inside the chamber. Experimental results were compared with theoretical ones for both the source and the sound propagation inside the chamber. The acoustic source was modeled using the monopole radiation pattern of point sources, and the image method was used to deal with the reflective plane of the room, that is, the floor without insulation. The finite-difference time-domain (FDTD) method was implemented to calculate the sound pressure value at every spatial point of the chamber. Comparison between theoretical and experimental data yields minimal error, giving satisfactory results for the hemi-anechoic characterization of the chamber.
Keywords: acoustic impedance, finite-difference time-domain, hemi-anechoic characterization
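The image-method part of the theoretical model can be sketched directly: the reflective floor is replaced by a mirrored monopole, and the two contributions are summed at the receiver. Frequency and positions below are illustrative assumptions:

```python
import numpy as np

c, f = 343.0, 1000.0                 # speed of sound [m/s], frequency [Hz]
k = 2 * np.pi * f / c                # wavenumber
src = np.array([0.0, 0.0, 1.5])      # source 1.5 m above the floor (z = 0)
img = src * np.array([1, 1, -1])     # image source mirrored through the floor

def pressure(point):
    """Complex pressure: direct monopole plus its floor image."""
    r1 = np.linalg.norm(point - src)
    r2 = np.linalg.norm(point - img)
    return np.exp(-1j * k * r1) / r1 + np.exp(-1j * k * r2) / r2

mic = np.array([2.0, 0.0, 1.2])
level = 20 * np.log10(abs(pressure(mic)))
print(f"relative SPL at the microphone: {level:.1f} dB")
```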
Procedia PDF Downloads 161
3253 Effect of Model Dimension in Numerical Simulation on Assessment of Water Inflow to Tunnel in Discontinuous Rock
Authors: Hadi Farhadian, Homayoon Katibeh
Abstract:
Groundwater inflow to tunnels is one of the most important problems in tunneling operations. The objective of this study is to investigate the effect of model dimensions on the assessment of tunnel inflow in discontinuous rock masses using numerical modeling. In numerical simulation, the model dimension plays an important role in the prediction of the water inflow rate. When the model dimension is very small, the short distance between the model boundary and the tunnel means that the boundary conditions affect the estimated amount of groundwater flow into the tunnel, and the results show a very high inflow to the tunnel. Hence, in this study, the two-dimensional universal distinct element code (UDEC) was used, and the impact of different model parameters, such as tunnel radius, joint spacing, and horizontal and vertical model domain extent, has been evaluated. Results show that the model domain extent is a function of the most significant parameters, which are tunnel radius and joint spacing.
Keywords: water inflow, tunnel, discontinuous rock, numerical simulation
Procedia PDF Downloads 524
3252 Application of Regularized Spatio-Temporal Models to the Analysis of Remote Sensing Data
Authors: Salihah Alghamdi, Surajit Ray
Abstract:
Space-time data can be observed over irregularly shaped manifolds, which might have complex boundaries or interior gaps. Most of the existing methods do not consider the shape of the data, and as a result, it is difficult to model irregularly shaped data while accommodating the complex domain. We used a method that can deal with space-time data distributed over non-planar regions. The method is based on partial differential equations and finite element analysis. The model can be estimated using a penalized least squares approach, with a regularization term that controls over-fitting. The model is regularized using two roughness penalties, which consider the spatial and temporal regularities separately. The integrated square of the second derivative of the basis function is used as the temporal penalty, while the spatial penalty consists of the integrated square of the Laplace operator, integrated exclusively over the domain of interest, which is determined using the finite element technique. In this paper, we applied a spatio-temporal regression model with partial differential equation regularization (ST-PDE) to analyze remote sensing data measuring the greenness of vegetation, quantified by an index called the enhanced vegetation index (EVI). The EVI data consist of measurements that take values between -1 and 1, reflecting the level of greenness of a region over a period of time. We applied the ST-PDE approach to an irregularly shaped region of the EVI data. The approach efficiently accommodates the irregularly shaped regions by taking into account the complex boundaries rather than smoothing across them. Furthermore, the approach succeeds in capturing the temporal variation in the data.
Keywords: irregularly shaped domain, partial differential equations, finite element analysis, complex boundary
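In notation suggested by this description (the symbols λ_T, λ_S for the smoothing parameters, Ω for the spatial domain, and T for the time window are our own, not the paper's), the penalized least squares objective can be written as:

```latex
\hat f \;=\; \operatorname*{arg\,min}_{f} \;
  \sum_{i=1}^{n} \bigl( z_i - f(\mathbf{p}_i, t_i) \bigr)^2
  \;+\; \lambda_T \int_{\Omega}\!\int_{T}
        \Bigl( \frac{\partial^2 f}{\partial t^2} \Bigr)^{2} \, dt \, d\mathbf{p}
  \;+\; \lambda_S \int_{T}\!\int_{\Omega}
        \bigl( \Delta f \bigr)^{2} \, d\mathbf{p} \, dt
```

Here the spatial integral of the squared Laplacian is evaluated only over the irregular domain Ω via the finite element discretization, which is what keeps the smoothing from leaking across complex boundaries.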
Procedia PDF Downloads 140
3251 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas
Authors: Sahithi Yarlagadda
Abstract:
The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization qualify for an evolutionary algorithmic approach, since the antenna parameters depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we can randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too varied to fit into a single function. So, the weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtain the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to get the maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation is mainly aimed at studying the practical computational, processing, and design complexities incurred during simulations. HFSS is chosen for simulations and results. MATLAB is used to generate the computations and combinations and to log the data. MATLAB is also used to apply machine learning algorithms and to plot the data used to design the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself. The MATLAB parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters like slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric constant; K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated antenna optimization for the Vivaldi antenna.
Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm
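A stripped-down version of the genetic loop described above; here an analytic placeholder plays the role of the fitness that the paper obtains from HFSS through its API, and the design parameters and bounds are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
bounds = np.array([[0.5, 3.0],    # slot-line width [mm] (assumed)
                   [20.0, 60.0],  # flare aperture [mm] (assumed)
                   [2.0, 5.0]])   # substrate permittivity (assumed)
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(24, 3))

def fitness(ind):
    # Placeholder: peak at an arbitrary "good" design point. In the
    # described workflow this would be gain/bandwidth returned by HFSS.
    target = np.array([1.2, 42.0, 3.4])
    return -np.sum(((ind - target) / (bounds[:, 1] - bounds[:, 0]))**2)

for generation in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-12:]]          # select the best half
    kids = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(12, size=2)]
        child = np.where(rng.random(3) < 0.5, a, b)  # uniform recombination
        child += rng.normal(0, 0.02, 3) * (bounds[:, 1] - bounds[:, 0])
        kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best design found:", best)
```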
Procedia PDF Downloads 109
3250 A Refinement Strategy Coupling Event-B and Planning Domain Definition Language (PDDL) for Planning Problems
Authors: Sabrine Ammar, Mohamed Tahar Bhiri
Abstract:
Automated planning has a de facto standard language, the Planning Domain Definition Language (PDDL), for describing planning problems. It aims to formalize planning problems described by the concept of a state space. PDDL-related dynamic analysis tools, namely planners and validators, are insufficient for verifying and validating PDDL descriptions. Indeed, these tools only make it possible to detect errors a posteriori by means of testing activity. In this paper, we recommend a formal approach coupling the two languages Event-B and PDDL for automated planning. Event-B is used for formal modeling by stepwise refinement, with mathematical proofs, of planning problems. Thus, this paper proposes a refinement strategy allowing reliable PDDL descriptions to be obtained from an ultimate Event-B model that is correct by construction. This ultimate Event-B model, which is supposed to be translatable into PDDL, is automatically translated into PDDL using our MDE Event-B2PDDL tool.
Keywords: code generation, Event-B, PDDL, refinement strategy, translation rules
Procedia PDF Downloads 196
3249 Laminar Periodic Vortex Shedding over a Square Cylinder in Pseudoplastic Fluid Flow
Authors: Shubham Kumar, Chaitanya Goswami, Sudipto Sarkar
Abstract:
Pseudoplastic fluid flow (n < 1, n being the power index) can be found in the food, pharmaceutical, and process industries and has a very complex flow nature. To our knowledge, inadequate research has been done on this kind of flow, even at very low Reynolds numbers. In the present computation, we consider unsteady laminar flow over a square cylinder in a pseudoplastic flow environment. For Newtonian fluid flow, the laminar vortex shedding range lies between Re = 47-180. In this problem, we consider Re = 100 (Re = U∞a/ν, where U∞ is the free stream velocity of the flow, a is the side of the cylinder, and ν is the kinematic viscosity of the fluid). The pseudoplastic fluid range has been chosen from close to the Newtonian fluid (n = 0.8) to very high pseudoplasticity (n = 0.1). The flow domain is constructed using Gambit 2.2.30, and this software is also used to generate the mesh and to impose the boundary conditions. For all cases, the domain size is taken as 36a × 16a with 280 × 192 grid points in the streamwise and flow-normal directions, respectively. The domain and the grid points were selected after a thorough grid-independence study at n = 1.0. Fine and equal grid spacing is used close to the square cylinder to capture the upper and lower shear layers shed from the cylinder. Away from the cylinder, the grid is unequal in size and stretched out in all directions. Velocity inlet (u = U∞), pressure outlet (Neumann condition), and symmetry (free-slip boundary condition, du/dy = 0, v = 0) conditions are used at the upper and lower domain boundaries for this simulation. A wall boundary (u = v = 0) is applied on the square cylinder surface. The fully conservative 2-D unsteady Navier-Stokes equations are discretized and then solved by Ansys Fluent 14.5 to understand the flow nature. The SIMPLE algorithm, formulated in the finite volume method, is selected for this purpose, as it is the default solver in Fluent. The result obtained for Newtonian fluid flow agrees well with previous work, supporting Fluent's usefulness in academic research. A detailed analysis of the instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It has been observed that the drag coefficient increases continuously as n is reduced. Also, the vortex shedding phenomenon changes at n = 0.4 due to flow instability. These are some of the remarkable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
Keywords: Ansys Fluent, CFD, periodic vortex shedding, pseudoplastic fluid flow
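The pseudoplastic behaviour enters the momentum equations through a power-law apparent viscosity, which a short sketch makes explicit (the consistency index K is an assumed value; the power indices match the paper's range):

```python
import numpy as np

def apparent_viscosity(shear_rate, n, K=1.0):
    """mu_app = K * gamma_dot**(n - 1); shear-thinning for n < 1."""
    return K * np.asarray(shear_rate, dtype=float) ** (n - 1.0)

gamma_dot = np.array([0.1, 1.0, 10.0, 100.0])
for n in (1.0, 0.8, 0.4, 0.1):
    # Viscosity drops with increasing shear rate whenever n < 1.
    print(f"n = {n}:", apparent_viscosity(gamma_dot, n))
```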
Procedia PDF Downloads 203
3248 Combined Automatic Speech Recognition and Machine Translation in Business Correspondence Domain for English-Croatian
Authors: Sanja Seljan, Ivan Dunđer
Abstract:
The paper presents combined automatic speech recognition (ASR) for English and machine translation (MT) for the English-Croatian and Croatian-English language pairs in the domain of business correspondence. The first part presents the results of training a commercial ASR system on two English data sets, enriched by error analysis. The second part presents the results of machine translation performed by the online tool Google Translate for the English-Croatian and Croatian-English language pairs. Human evaluation in terms of usability is conducted, and internal consistency is calculated by Cronbach's alpha coefficient, enriched by error analysis. Automatic evaluation is performed with the WER (Word Error Rate) and PER (Position-independent word Error Rate) metrics, followed by an investigation of Pearson's correlation with the human evaluation.
Keywords: automatic machine translation, integrated language technologies, quality evaluation, speech recognition
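WER is the standard edit-distance-based metric; a minimal implementation of its usual definition (the paper's exact tooling is not specified) looks like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("please send the invoice today",
          "please send invoice to day"))  # 3 edits / 5 words = 0.6
```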
Procedia PDF Downloads 484
3247 Design and Development of Lead-Free BiFeO₃-BaTiO₃ Quenched Ceramics for High Piezoelectric Strain Performance
Authors: Muhammad Habib, Lin Tang, Guoliang Xue, Attaur Rahman, Myong-Ho Kim, Soonil Lee, Xuefan Zhou, Yan Zhang, Dou Zhang
Abstract:
Designing high-performance, lead-free ceramics has become a cutting-edge research topic due to growing concerns about the toxic nature of lead-based materials. In this work, a convenient strategy of compositional design and domain engineering is applied to lead-free BiFeO₃-BaTiO₃ ceramics, which provides a flexible polarization free-energy profile for domain switching. Here, an enhanced dynamic piezoelectric constant (d33* = 772 pm/V) and good thermal stability (d33* varying by 26% over the temperature range of 20-180 °C) are achieved simultaneously, with a high Curie temperature (TC) of 432 °C. This high piezoelectric strain performance is collectively attributed to multiple effects, such as thermal quenching, suppression of defect charges by donor doping, chemically induced local structure heterogeneity, and an electric-field-induced phase transition. Furthermore, the addition of BT content decreased the octahedral tilting, reduced the anisotropy for domain switching, and increased the tetragonality (cₜ/aₜ), providing a wider polar length for B-site cation displacement and leading to high piezoelectric strain performance. Atomic-resolution transmission electron microscopy and piezoresponse force microscopy combined with X-ray diffraction results strongly support the origin of the high piezoelectricity. The high and temperature-stable piezoelectric strain response of this work is superior to those of other lead-free ceramics. The synergistic approach of compositional design and the concept presented here for the origin of the high strain response provide a paradigm for the development of materials for high-temperature piezoelectric actuator applications.
Keywords: piezoelectric, BiFeO₃-BaTiO₃, quenching, temperature-insensitive
Procedia PDF Downloads 83
3246 Laser Therapy in Patients with Rheumatoid Arthritis: A Clinical Trial
Authors: Joao Paulo Matheus, Renan Fangel
Abstract:
Rheumatoid arthritis is a chronic, inflammatory, systemic, and progressive disease that affects the synovial joints bilaterally, causing definitive orthopedic damage. It has a higher prevalence in postmenopausal female patients. It is a disabling disease that causes joint deformities which may compromise the functionality of the affected segment. The aim of this study was to evaluate the influence of low-intensity therapeutic laser on the perception of pain and the quality of life in patients with rheumatoid arthritis. This is a randomized clinical study involving 6 women with a mean age of 56.8±6.3 years. Exclusion criteria: patients with acute pain, chronic infectious disease, or an underlying acute or chronic disease. An AsGaAl laser with 808 nm wavelength, 100 mW power, beam output area of 0.028 cm², and power density of 3.57 W/cm² was used. The laser was applied at pre-defined points in the interphalangeal and metacarpophalangeal joints, totaling 24 points, 2 times a week for 4 weeks, totaling 8 sessions. The Pain Inventory (IBD) and the Visual Analogue Scale (VAS) were used for the analysis of pain, and the WHOQOL-bref for the quality of life assessment. There was no statistical difference between the beginning (5.67±2.66) and the end (4.67±3.78) of treatment (p=0.70). There was also no statistical difference between the beginning (5.67±2.66) and the end (4.67±3.78) of the treatments in the VAS analysis (p=0.68). The overall mean quality of life obtained by the questionnaire at the start of treatment was 42.3±7.6, while at the end of treatment it was 58.5±7.6 (p=0.01), and the domains of the questionnaire with significant differences were the psychological domain, 42.9±6.8 versus 66.7±12.9 (p=0.004), the social domain, 39.9±5.7 versus 68.1±6.3 (p=0.0005), and the environmental domain, 36.3±7.3 versus 56.3±12.5 (p=0.003). It can be concluded that the low-intensity therapeutic laser did not produce significant changes in the pain of rheumatoid arthritis patients; however, there was an improvement in the patients' quality of life in the psychological, social, and environmental domains.
Keywords: laser therapy, pain, quality of life, rheumatoid arthritis
Procedia PDF Downloads 250
3245 Improvement of the 3D Finite Element Analysis of High Voltage Power Transformer Defects in Time Domain
Authors: M. Rashid Hussain, Shady S. Refaat
Abstract:
The high voltage power transformer is the most essential part of electrical power utilities. Reliability of the transformers is of the utmost concern, and any failure of a transformer can lead to catastrophic losses for an electric power utility. The causes of transformer failure include insulation failure by partial discharge, core and tank failure, cooling unit failure, current transformer failure, etc. For the study of power transformer defects, finite element analysis (FEA) can provide valuable information on the severity of defects. FEA provides a more accurate representation of complex geometries because it considers the thermal, electrical, and environmental influences on the insulation models in order to obtain the basic characteristics of the insulation system during normal and partial discharge conditions. The purpose of this paper is the time domain analysis of a 3D model of high voltage power transformer defects using FEA, to study the electric field distribution at different points on the defects.
Keywords: power transformer, finite element analysis, dielectric response, partial discharge, insulation
Procedia PDF Downloads 157
3244 Fluid Structure Interaction of Flow and Heat Transfer around a Microcantilever
Authors: Khalil Khanafer
Abstract:
This study emphasizes analyzing the effect of flow conditions and of the geometric variation of the microcantilever's bluff body on the microcantilever's detection capabilities within a fluidic device, using a finite element fluid-structure interaction model. The parameters considered include the inlet velocity, the flow direction, and the height of the microcantilever's supporting system within the fluidic cell. The transport equations are solved using a finite element formulation based on the Galerkin method of weighted residuals. For a flexible microcantilever, a fully coupled fluid-structure interaction (FSI) analysis is utilized, and the fluid domain is described by an arbitrary Lagrangian-Eulerian (ALE) formulation that is fully coupled to the structure domain. The results of this study showed a profound effect of the magnitude and direction of the inlet velocity and of the height of the bluff body on the deflection of the microcantilever. The vibration characteristics were also investigated in this study. This work paves the way for researchers to design efficient microcantilevers that display the least error in measurements.
Keywords: fluidic cell, FSI, microcantilever, flow direction
Procedia PDF Downloads 374
3243 Multiple Medical Landmark Detection on X-Ray Scan Using Reinforcement Learning
Authors: Vijaya Yuvaram Singh V M, Kameshwar Rao J V
Abstract:
The challenge with the development of neural network based methods for the medical domain is the availability of data. Anatomical landmark detection in the medical domain is the process of finding points on the X-ray scan report of a patient. Most of the time this task is done manually by trained professionals, as it requires precision and domain knowledge. Traditionally, object detection based methods are used for landmark detection. Here, we utilize reinforcement learning and a query-based method to train a single agent capable of detecting multiple landmarks. A deep Q-network agent is trained to detect single and multiple landmarks present on the hip and shoulder in X-ray scans of a patient. A single agent is trained to find multiple landmarks, making it superior to having an individual agent per landmark. For the initial study, five images of different patients are used as the environment, and the agent's performance is tested on two unseen images.
Keywords: reinforcement learning, medical landmark detection, multi target detection, deep neural network
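As a deliberately simplified stand-in for the paper's deep Q-network, the sketch below uses tabular Q-learning to move a crosshair toward a landmark on a small grid; the real agent would replace the table with a deep network over image observations, and the grid size and reward shaping here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
GRID, ACTIONS = 8, [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up/down/left/right
landmark = (5, 6)
Q = np.zeros((GRID, GRID, len(ACTIONS)))
alpha, gamma, eps = 0.2, 0.9, 0.2

def step(pos, a):
    """Move the crosshair; reward +1 for getting closer, -1 otherwise."""
    ny = min(max(pos[0] + ACTIONS[a][0], 0), GRID - 1)
    nx = min(max(pos[1] + ACTIONS[a][1], 0), GRID - 1)
    old = abs(pos[0] - landmark[0]) + abs(pos[1] - landmark[1])
    new = abs(ny - landmark[0]) + abs(nx - landmark[1])
    return (ny, nx), (1.0 if new < old else -1.0)

for episode in range(500):
    pos = (rng.integers(GRID), rng.integers(GRID))
    for _ in range(40):
        a = (rng.integers(4) if rng.random() < eps
             else int(np.argmax(Q[pos])))
        nxt, r = step(pos, a)
        Q[pos][a] += alpha * (r + gamma * Q[nxt].max() - Q[pos][a])
        pos = nxt
        if pos == landmark:
            break

pos = (0, 0)                       # greedy rollout from a corner
for _ in range(20):
    pos, _ = step(pos, int(np.argmax(Q[pos])))
print("final position:", pos, "landmark:", landmark)
```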
Procedia PDF Downloads 142
3242 An Insight into the Conformational Dynamics of Glycan through Molecular Dynamics Simulation
Authors: K. Veluraja
Abstract:
The glycans of glycolipids and glycoproteins play a significant role in living systems, particularly in molecular recognition processes. Molecular recognition processes are attributed to their occurrence on the surface of the cell, the sequential arrangement and type of sugar molecules present in the oligosaccharide structure, glycosidic linkage diversity (glycoinformatics), and conformational diversity (glycoconformatics). Molecular dynamics simulation is a theoretical-cum-computational tool successfully utilized to establish the glycoconformatics of glycans. Studies on various oligosaccharides clearly indicate that oligosaccharides exist in multiple conformational states, and that these conformational states arise due to the flexibility associated with the glycosidic torsional angles (φ,ψ). As an example, the single disaccharide structure NeuNAcα(2-3)Gal exists in three different conformational states due to differences in the preferred values of the glycosidic torsional angles (φ,ψ). Hence, establishing three-dimensional structural and conformational models for glycans (Cartesian coordinates of every individual atom of an oligosaccharide structure in a preferred conformation) is quite crucial to understanding various molecular recognition processes, such as glycan-toxin and glycan-virus interactions. The glycoconformatics models obtained for various glycans through molecular dynamics simulation are stored in our 3DSDSCAR (3DSDSCAR.ORG) public-domain database, and its utility in understanding molecular recognition processes and in drug design ventures will be discussed.
Keywords: glycan, glycoconformatics, molecular dynamics simulation, oligosaccharide
Procedia PDF Downloads 137
3241 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work is about developing a tool for the visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent changes in mental state using the powers of the different frequency bands, in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on a better presentation of the signal, which is why it could be a good utilization tool for clinicians. The algorithm performs basic filtering using band pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet transform based de-noising method. Frequency domain features are used for segmentation, based on the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation; one provides the time scale and the other assigns the segmentation rule. The segmented data is displayed second by second, successively, with different color codes. The segment length can be selected as per the needs of the objective. The proposed algorithm has been tested on the EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of the desired length, representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be improved for use in real-time visualization with the desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
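The core of the band-power segmentation can be sketched as follows: slide a window over the filtered signal, estimate the spectrum per epoch, and label each epoch by its dominant conventional band. The synthetic signal and the 1-second window below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate [Hz] (assumed)
bands = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

rng = np.random.default_rng(3)
t = np.arange(0, 10, 1 / fs)               # 10 s of synthetic "EEG":
eeg = np.where(t < 5,                      # alpha first, then beta
               np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 20 * t))
eeg += 0.5 * rng.standard_normal(t.size)

def dominant_band(segment):
    """Label a window by the band holding the most spectral power."""
    freqs, psd = welch(segment, fs=fs, nperseg=segment.size)
    powers = {name: psd[(freqs >= lo) & (freqs < hi)].sum()
              for name, (lo, hi) in bands.items()}
    return max(powers, key=powers.get)

win = fs                                    # 1-second epochs
labels = [dominant_band(eeg[i:i + win])
          for i in range(0, eeg.size - win + 1, win)]
print(labels)   # expected: ~5 alpha epochs followed by ~5 beta epochs
```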
Procedia PDF Downloads 397
3240 Integrating Wearable Textile Sensors and IoT for Continuous Electromyography Monitoring
Authors: Bulcha Belay Etana, Benny Malengier, Debelo Oljira, Janarthanan Krishnamoorthy, Lieva Vanlangenhove
Abstract:
Electromyography (EMG) is a technique used to measure the electrical activity of muscles. EMG can be used to assess muscle function in a variety of settings, including clinical, research, and sports medicine. The aim of this study was to develop a wearable textile sensor for EMG monitoring. The sensor was designed to be soft, stretchable, and washable, making it suitable for long-term use. The sensor was fabricated using a conductive thread material embroidered onto a fabric substrate, and was then connected to a microcontroller unit (MCU) and a Wi-Fi-enabled module. The MCU was programmed to acquire the EMG signal and transmit it wirelessly to the Wi-Fi-enabled module, which then sent the signal to a server, where it could be accessed by a computer or smartphone. The sensor was able to successfully acquire and transmit EMG signals from a variety of muscles, with signal quality comparable to that of commercial EMG sensors. The development of this sensor has the potential to improve the way EMG is used in a variety of settings. Because it is soft, stretchable, and washable, it is suitable for long-term use, which makes it ideal for clinical settings where patients may need to wear the sensor for extended periods of time. The sensor is also small and lightweight, making it ideal for use in sports medicine and research settings. The data for this study was collected from a group of healthy volunteers, who were asked to perform a series of muscle contractions while the EMG signal was recorded. The data was then analyzed to assess the performance of the sensor. The EMG signals were analyzed using a variety of methods, including time-domain analysis and frequency-domain analysis. The time-domain analysis was used to extract features such as the root mean square (RMS) and average rectified value (ARV), while the frequency-domain analysis was used to extract features such as the power spectrum. The question addressed by this study was whether a wearable textile sensor could be developed that is soft, stretchable, and washable, and that can successfully acquire and transmit EMG signals. The results demonstrate that such a sensor can be developed, and that it has the potential to improve the way EMG is used in a variety of settings.
Keywords: EMG, electrode position, smart wearable, textile sensor, IoT, IoT-integrated textile sensor
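The two time-domain features named above are one-liners; here is a sketch with an assumed window length and synthetic data standing in for the recorded EMG:

```python
import numpy as np

def rms(x):
    """Root mean square of a window."""
    return np.sqrt(np.mean(np.square(x)))

def arv(x):
    """Average rectified value of a window."""
    return np.mean(np.abs(x))

fs, win = 1000, 200                         # 1 kHz sampling, 200 ms windows
rng = np.random.default_rng(7)
# Synthetic EMG: noise whose amplitude ramps up as contraction intensifies.
emg = rng.standard_normal(5 * fs) * np.linspace(0.1, 1.0, 5 * fs)

for start in range(0, emg.size - win + 1, win * 5):
    seg = emg[start:start + win]
    print(f"t = {start / fs:.1f}s  RMS = {rms(seg):.3f}  ARV = {arv(seg):.3f}")
```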
Procedia PDF Downloads 75
3239 Optimization of the Measure of Compromise as a Version of Sorites Paradox
Authors: Aleksandar Hatzivelkos
Abstract:
The term "compromise" is mostly used casually within social choice theory. It is usually treated as a mere result of the social choice function, which omits its deeper meaning and ramifications. This paper is based on a mathematical model for the description of a compromise as a version of the Sorites paradox. It introduces a formal definition of a d-measure of divergence from a compromise and models a notion of compromise that is often used only colloquially. Such a model for the vagueness phenomenon, which lies at the core of the notion of compromise, enables the introduction of new mathematical structures. In order to maximize compromise, different methods can be used. In this paper, we explore the properties of a social welfare function TdM (from Total d-Measure), defined as the function that minimizes the total sum of d-measures of divergence over all possible linear orderings. We prove that TdM satisfies the strict Pareto principle and behaves well asymptotically. Furthermore, we show that for certain domain restrictions, TdM satisfies positive responsiveness and IIIA (intense independence of irrelevant alternatives), thus being equivalent to the Borda count on such domain restrictions. This result offers new opportunities in social choice, especially when there is an emphasis on compromise in the decision-making process.
Keywords: Borda count, compromise, measure of divergence, minimization
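In notation of our own choosing (the paper's formal definitions may differ), the social welfare function can be written as the linear ordering minimizing the total divergence:

```latex
\mathrm{TdM}(P_1, \dots, P_n) \;=\;
  \operatorname*{arg\,min}_{\succ \,\in\, \mathcal{L}(A)}
  \;\sum_{i=1}^{n} d(\succ, P_i)
```

where L(A) denotes the set of linear orderings of the alternatives A, P_i is the i-th voter's preference ordering, and d is the d-measure of divergence from a compromise.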
Procedia PDF Downloads 133
3238 Requirement Engineering and Software Product Line Scoping Paradigm
Authors: Ahmed Mateen, Zhu Qingsheng, Faisal Shahzad
Abstract:
Requirements engineering (RE) is the part of the software development lifecycle in which the structure of the software to be built is established. Software product line development is a new topic area within the domain of software engineering; it plays an important role in decision making and is ultimately helpful for productive software development in a growing business environment. Decisions are central to engineering processes, and they hold them together. It is argued that better decisions will lead to better engineering, and achieving better decisions requires that they be understood in detail. In order to address these issues, companies are moving towards Software Product Line Engineering (SPLE), which helps in providing large varieties of products with minimum development effort and cost. This paper proposes a new framework for software product lines and compares it with other models. The results can help in understanding the needs of SPL testing by identifying points that still require additional investigation. In future work, we will apply this model in a controlled environment with industrial SPL projects, which will open a new horizon for SPL process management testing strategies.
Keywords: requirements engineering, software product lines, scoping, process structure, domain specific language
Procedia PDF Downloads 224
3237 The Development of an Automated Computational Workflow to Prioritize Potential Resistance Variants in HIV Integrase Subtype C
Authors: Keaghan Brown
Abstract:
The prioritization of drug resistance mutations impacting protein folding or protein-drug and protein-DNA interactions within macromolecular systems is critical to the success of treatment regimens. With a continual increase in computational tools to assess these impacts, scalability and reproducibility have become essential components of computational analysis and experimental research. Here we introduce a bioinformatics pipeline that combines several structural analysis tools in a simplified workflow, optimizing the available computational hardware and software to automatically ease the flow of data transformations. Utilizing pre-established software tools, it was possible to develop a pipeline with a set of pre-defined functions that automates the introduction of mutations into the HIV-1 integrase protein structure, calculates the gain and loss of polar interactions, and calculates the change in energy of the protein fold. Additionally, an automated molecular dynamics analysis was implemented, which reduces the constant need for user input and output management. The resulting pipeline, Automated Mutation Introduction and Analysis (AMIA), is an open source set of scripts designed to introduce and analyse the effects of mutations on the static protein structure as well as on the results of the multi-conformational states from molecular dynamics simulations. The workflow allows the user to visualize all outputs in a user-friendly manner, thereby enabling the prioritization of variant systems for experimental validation.
Keywords: automated workflow, variant prioritization, drug resistance, HIV Integrase
Procedia PDF Downloads 77
3236 A Comparative Study between FEM and Meshless Methods
Authors: Jay N. Vyas, Sachin Daxini
Abstract:
Numerical simulation techniques are now widely used in product development and testing instead of expensive, time-consuming, and sometimes dangerous laboratory experiments. Numerous numerical methods are available for performing simulations of physical problems in different engineering fields. Grid-based methods, like the finite element method, are extensively used in performing various kinds of static, dynamic, structural, and non-structural analysis during the product development phase. The drawbacks of grid-based methods, in terms of discontinuous secondary field variables and difficulties in dealing with fracture mechanics and large deformation problems, led to the development in the last few years of a relatively new class of numerical simulation techniques, popularly known as meshless or meshfree methods. Meshless methods are expected to be more adaptive and flexible than the finite element method because domain discretization in meshless methods requires only nodes. This paper introduces meshless methods and differentiates them from the finite element method in terms of the following aspects: the shape functions used, the role of the weight function, techniques to impose essential boundary conditions, integration techniques for the discrete system equations, convergence rate, accuracy of solution, and computational effort. The capabilities, benefits, and limitations of meshless methods are discussed and concluded at the end of the paper.
Keywords: numerical simulation, grid-based methods, finite element method, meshless methods
Procedia PDF Downloads 389
3235 A Near-Optimal Domain Independent Approach for Detecting Approximate Duplicates
Authors: Abdelaziz Fellah, Allaoua Maamir
Abstract:
We propose a domain-independent merging-cluster filter approach, complemented with a set of algorithms, for identifying approximate duplicate entities efficiently and accurately within a single data source and across multiple data sources. The near-optimal merging-cluster filter (MCF) approach is based on the well-tuned Monge-Elkan algorithm, extended with an affine variant of the Smith-Waterman similarity measure. We then present constant, variable, and function threshold algorithms that work conceptually in a divide-merge filtering fashion for detecting near duplicates as hierarchical clusters along with their corresponding representatives. The algorithms take recursive refinement approaches in the spirit of filtering, merging, and updating cluster representatives to detect approximate duplicates at each level of the cluster tree. Experiments show the high effectiveness and accuracy of the MCF approach in detecting approximate duplicates, outperforming the seminal Monge-Elkan algorithm on several real-world benchmarks and generated datasets.
Keywords: data mining, data cleaning, approximate duplicates, near-duplicates detection, data mining applications and discovery
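The Monge-Elkan scheme at the heart of MCF matches each token of one record to its best-scoring counterpart in the other and averages the scores. In this sketch, a normalized edit-distance ratio stands in for the paper's affine Smith-Waterman inner measure:

```python
from difflib import SequenceMatcher

def inner_sim(a: str, b: str) -> float:
    """Token-level similarity; stand-in for affine Smith-Waterman."""
    return SequenceMatcher(None, a, b).ratio()

def monge_elkan(record1: str, record2: str) -> float:
    """Average of each token's best match in the other record."""
    tokens1, tokens2 = record1.lower().split(), record2.lower().split()
    return sum(max(inner_sim(t1, t2) for t2 in tokens2)
               for t1 in tokens1) / len(tokens1)

a = "Jonathan R. Smith"
b = "Smith Jonathon"
print(f"ME(a, b) = {monge_elkan(a, b):.3f}")   # high score despite reordering
```

Note that the measure is asymmetric; symmetrizing by averaging ME(a, b) and ME(b, a) is a common refinement.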
Procedia PDF Downloads 387