Search results for: parallel reasoning
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1483

1183 Determination of Surface Deformations with Global Navigation Satellite System Time Series

Authors: Ibrahim Tiryakioglu, Mehmet Ali Ugur, Caglar Ozkaymak

Abstract:

The development of GNSS technology has led to increasingly widespread and successful applications of GNSS surveys for monitoring crustal movements. However, multi-period GPS survey solutions have not been applied in monitoring vertical surface deformation. This study uses the long-term GNSS time series required to determine vertical deformations. In recent years, surface deformations parallel and semi-parallel to the Bolvadin fault have occurred in Western Anatolia. These surface deformations have continued to occur in the Bolvadin settlement area, which is located mostly on alluvial ground. Due to these surface deformations, a number of cracks in buildings located in the residential areas and breaks in underground water and sewage systems have been observed. In order to determine the amount of vertical surface deformation, two continuous GNSS stations have been established in the region. The stations have been operating since 2015 and 2017, respectively. In this study, GNSS observations from these two stations were processed with the GAMIT/GLOBK (GNSS Analysis Massachusetts Institute of Technology/GLOBal Kalman) program package to create coordinate time series. Through time series analysis, the stations' behavior models (linear, periodic, etc.), the causes of these behaviors, and the corresponding mathematical models were determined. The time series analysis of the two GNSS stations shows approximately 50-80 mm/yr of vertical movement.
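
As an illustration of the trajectory modeling described above (a linear velocity term plus periodic terms estimated from a coordinate time series), the following is a minimal Python sketch on synthetic data. It stands in for, and is not part of, the GAMIT/GLOBK processing chain, and the simulated velocity is only an example value.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic daily vertical (Up) series in mm, purely illustrative;
# the real series would come from the GAMIT/GLOBK daily solutions.
t = np.arange(0, 4 * 365.25) / 365.25            # time in years
truth = -65.0 * t + 3.0 * np.sin(2 * np.pi * t)  # assumed trend + annual term
up = truth + rng.normal(0.0, 2.0, t.size)        # 2 mm observation noise

# Design matrix of the usual trajectory model:
# offset + velocity*t + annual and semi-annual sine/cosine terms
A = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),
])
coeffs, *_ = np.linalg.lstsq(A, up, rcond=None)
print(f"estimated vertical velocity: {coeffs[1]:.1f} mm/yr")
```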

Keywords: Bolvadin fault, GAMIT, GNSS time series, surface deformations

Procedia PDF Downloads 145
1182 Continuous-Time Analysis and Performance Assessment for Digital Control of High-Frequency Switching Synchronous DC-DC Converter

Authors: Rihab Hamdi, Amel Hadri Hamida, Ouafae Bennis, Sakina Zerouali

Abstract:

This paper presents a performance analysis and robustness assessment of a digitally controlled DC-DC three-cell buck converter connected in parallel, operating in continuous conduction mode (CCM), facing feeding-parameter variation and load disturbance. The control strategy relies on continuous-time design with an averaged modeling technique for the high-frequency switching converter. The methodology is to carry the complete design procedure, including the choice of an instantaneous current operating point for designing the digital closed loop, over to the same continuous-time domain. Moreover, the adopted approach includes a digital voltage control (DVC) technique that takes into account digital control delays and sampling effects, with the aim of improving efficiency and dynamic response and preventing generally undesirable phenomena. The results obtained under load, input, and reference changes clearly demonstrate an excellent dynamic response of the proposed technique, as well as stability under all operating conditions, with fast and smooth tracking of the specified output voltage. Simulation studies in the MATLAB/Simulink environment are performed to verify the concept.
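
As a rough sketch of the combination described above (a continuous-time averaged converter model driven by a sampled digital voltage loop with a one-sample computation delay), the Python fragment below simulates a single averaged buck cell; a discrete PI loop stands in for the DVC technique. All component values and gains are illustrative assumptions, not the three-cell design of the paper.

```python
# Averaged model of one buck cell (illustrative values, not the paper's design)
Vin, L, C, R = 24.0, 100e-6, 220e-6, 1.2
vref = 12.0
fs = 50e3                       # control sampling frequency
dt = 1e-7                       # integration step of the averaged model
kp, ki = 0.01, 50.0             # illustrative PI gains

iL = vo = integ = 0.0
duty_applied = duty_next = 0.0
t_next_sample = 0.0

for k in range(int(0.02 / dt)):             # simulate 20 ms
    t = k * dt
    if t >= t_next_sample:                  # sampled digital voltage loop
        duty_applied = duty_next            # one-sample computation delay
        err = vref - vo
        u = kp * err + ki * integ
        duty_next = min(max(u, 0.0), 1.0)
        if duty_next == u:                  # crude anti-windup: integrate only when unsaturated
            integ += err / fs
        t_next_sample += 1.0 / fs
    # continuous-time averaged converter dynamics
    iL += dt * (duty_applied * Vin - vo) / L
    vo += dt * (iL - vo / R) / C

print(f"output voltage after 20 ms: {vo:.2f} V (reference {vref} V)")
```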

Keywords: continuous conduction mode, digital control, parallel multi-cells converter, performance analysis, power electronics

Procedia PDF Downloads 128
1181 Unsupervised Feature Learning by Pre-Route Simulation of Auto-Encoder Behavior Model

Authors: Youngjae Jin, Daeshik Kim

Abstract:

This paper describes cycle-accurate simulation results for the weight values learned by an auto-encoder behavior model in a pre-route simulation. Given these results, we visualized the first-layer representations obtained from natural images. Many common deep learning threads have focused on learning high-level abstractions of unlabeled raw data by unsupervised feature learning. However, when handling such huge amounts of data, the learning methods' computational complexity and run time have limited advanced research. These limitations came from the fact that these algorithms were computed using only single-core CPUs. For this reason, parallel hardware such as FPGAs was seen as a possible solution to overcome these limitations. We adopted and simulated a ready-made auto-encoder to design a behavior model in Verilog HDL before designing the hardware. With the auto-encoder behavior model pre-route simulation, we obtained cycle-accurate results for the parameters of each hidden layer using MODELSIM. Cycle-accurate results are a very important factor in designing parallel digital hardware. Finally, this paper shows the appropriate operation of behavior-model-based pre-route simulation. Moreover, we visualized the latent representations learned by the first hidden layer with the Kyoto natural image dataset.
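
The Verilog behavior model itself is not reproduced in the abstract; purely as a software reference for what such a model computes, here is a minimal single-hidden-layer auto-encoder trained with plain gradient descent on random stand-in patches. The shapes, learning rate, and data are illustrative assumptions; the Kyoto dataset and the MODELSIM flow are not used.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 64))        # stand-ins for 8x8 image patches

n_hidden, lr = 25, 0.1
W1 = rng.normal(0, 0.1, (64, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 64)); b2 = np.zeros(64)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    H = sigmoid(X @ W1 + b1)      # encoder: hidden representation
    Y = H @ W2 + b2               # decoder: linear reconstruction
    err = Y - X                   # reconstruction error
    # Backpropagation of the mean-squared reconstruction loss
    dW2 = H.T @ err / len(X);  db2 = err.mean(axis=0)
    dH = err @ W2.T * H * (1 - H)
    dW1 = X.T @ dH / len(X);   db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("final reconstruction MSE:", float((err ** 2).mean()))
# The columns of W1 are the learned first-layer features one would visualize.
```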

Keywords: auto-encoder, behavior model simulation, digital hardware design, pre-route simulation, unsupervised feature learning

Procedia PDF Downloads 423
1180 Unsteady Three-Dimensional Adaptive Spatial-Temporal Multi-Scale Direct Simulation Monte Carlo Solver to Simulate Rarefied Gas Flows in Micro/Nano Devices

Authors: Mirvat Shamseddine, Issam Lakkis

Abstract:

We present an efficient, three-dimensional parallel multi-scale Direct Simulation Monte Carlo (DSMC) algorithm for the simulation of unsteady rarefied gas flows in micro/nanosystems. The algorithm employs a novel spatiotemporal adaptivity scheme. The scheme performs a fully dynamic multi-level grid adaption based on the gradients of flow macro-parameters and an automatic temporal adaptation. The computational domain consists of a hierarchical octree-based Cartesian grid representation of the flow domain and a triangular mesh for the solid object surfaces. The hybrid mesh, combined with the spatiotemporal adaptivity scheme, allows for increased flexibility and efficient data management, rendering the framework suitable for efficient particle-tracing and dynamic grid refinement and coarsening. The parallel algorithm is optimized to run DSMC simulations of strongly unsteady, non-equilibrium flows over multiple cores. The presented method is validated by comparing with benchmark studies and then employed to improve the design of micro-scale hotwire thermal sensors in rarefied gas flows.

Keywords: DSMC, oct-tree hierarchical grid, ray tracing, spatial-temporal adaptivity scheme, unsteady rarefied gas flows

Procedia PDF Downloads 285
1179 Switching of Series-Parallel Connected Modules in an Array for Partially Shaded Conditions in a Pollution Intensive Area Using High Powered MOSFETs

Authors: Osamede Asowata, Christo Pienaar, Johan Bekker

Abstract:

Photovoltaic (PV) modules may become a trend for future PV systems because of their greater flexibility in distributed system expansion, easier installation due to their nature, and higher system-level energy harvesting capabilities under shaded or PV manufacturing mismatch conditions, as compared to single or multi-string inverters. Novel residential-scale PV arrays are commonly connected to the grid by a single DC-AC inverter connected to a series, parallel, or series-parallel string of PV panels, or by many small DC-AC inverters which connect one or two panels directly to the AC grid. With increasing worldwide interest in sustainable energy production and use, there is renewed focus on the power electronic converter interface for DC energy sources. Three specific examples of such DC energy sources that will have a role in distributed generation and sustainable energy systems are the photovoltaic (PV) panel, the fuel cell stack, and batteries of various chemistries. A high-efficiency inverter using Metal Oxide Semiconductor Field-Effect Transistors (MOSFETs) for all active switches is presented for non-isolated photovoltaic and AC-module applications. The proposed configuration features high efficiency over a wide load range, low ground leakage current, and low output AC-current distortion with no need for split capacitors. The detailed power stage operating principles, pulse width modulation scheme, multilevel bootstrap power supply, and integrated gate drivers for the proposed inverter are described. Experimental results from a hardware prototype show not only that the MOSFETs operate efficiently in the system, but also that the ground leakage current issues are alleviated in the proposed inverter and that a maximum efficiency of 98% is achieved for the associated driver circuit. This, in turn, supports the need for a possible photovoltaic panel switching technique, which will help to reduce the effect of cloud movements as well as improve the overall efficiency of the system.

Keywords: grid connected photovoltaic (PV), Matlab efficiency simulation, maximum power point tracking (MPPT), module integrated converters (MICs), multilevel converter, series connected converter

Procedia PDF Downloads 103
1178 Building a Hierarchical, Granular Knowledge Cube

Authors: Alexander Denzler, Marcel Wehrle, Andreas Meier

Abstract:

A knowledge base stores facts and rules about the world that applications can use for the purpose of reasoning. By applying the concept of granular computing to a knowledge base, several advantages emerge. These can be harnessed by applications to improve their capabilities and performance. In this paper, the concept behind such a construct, called a granular knowledge cube, is defined, and its intended use as an instrument that manages to cope with different data types and detect knowledge domains is elaborated. Furthermore, the underlying architecture, consisting of the three layers of the storing, representing, and structuring of knowledge, is described. Finally, benefits as well as challenges of deploying it are listed alongside application types that could profit from having such an enhanced knowledge base.

Keywords: granular computing, granular knowledge, hierarchical structuring, knowledge bases

Procedia PDF Downloads 473
1177 Save Balance of Power: Can We?

Authors: Swati Arun

Abstract:

The present paper argues that Balance of Power (BOP) needs to be considered in conjunction with certain contingencies, such as geography. It is evident that sea powers ('insular' for better clarity) are not balanced (if at all) in the same way as land powers. It is apparent that the artificial insularity the US has achieved reduces the chances of balancing (constant) and helps it maintain preponderance (variable). But how precise is this approach in assessing the dynamics between China's rise, the reaction of other powers, and the US? The 'evolved' theory can be validated by putting China and the US in the equation. The systemic relation between nations was explained through the Balance of Power theory long before systems theory was propounded. The BOP is the crux of the functionality of 'power relation' dynamics, which has played its role in the most astounding ways, leading to situations of war and peace. Whimsical but true, the BOP has remained a complicated and indefinable concept from Hans Morgenthau to Kenneth Waltz. A challenge of the BOP, however, remains: that it has too many meanings. In recent times it has become evident that the myriad expectations generated by the BOP have not met the practicality of current world politics. It is for this reason that the BOP has been replaced by Preponderance Theory (PT) to explain the prevailing power situation. PT does provide empirical reasoning for the success of this theory but fails in the abstract logical reasoning required for making a theory universal. Unipolarity describes the current system as one where balance of power has become redundant. It seems to reach beyond the contours of the BOP, where a superpower does what it must to remain one. The centrality of this argument pivots around an exception: every time the BOP fails to operate, preponderance of power emerges. PT does not sit well with the primary logic of a theory because it works on an exception. The evolution of such a pattern and system, where the BOP fails and preponderance emerges, is absent. The puzzle here is whether the BOP has really become redundant or whether it needs polishing. The international power structure changed from multipolar to bipolar to unipolar. The BOP was looked to for the inevitable logic behind such changes and to answer the dilemma we see today: why is the US unchecked and unbalanced? But why was Britain unchecked in the 19th century, and why was China unbalanced in the 13th century? It is the insularity of the state that makes the BOP reproduce an 'imbalance of power', going a level up from the offshore balancer. This luxury of a state to maintain imbalance in the region of competition or threat is the causal relation between the BOP and geography. America has applied imbalancing, meaning disequilibrium in its favor, to maintain the regional balance so that over time the weaker does not get stronger and pose a competition. It could do that due to the significant disparity present between the US and the rest.

Keywords: balance of power, China, preponderance of power, US

Procedia PDF Downloads 258
1176 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigations are not keeping up with criminal developments. Therefore, criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable in identifying crime. It has been observed that algorithms based on artificial intelligence (AI) are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims at developing a multiagent framework for digital investigations using specific intelligent software agents (ISA). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent are dependent on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK is implemented using the Java Agent Development Framework, together with Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISA and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. Loading the agents cost about 5 percent of the time, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
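
The MADIK agents themselves are Java/JADE components; purely to illustrate the case-based reasoning step that classifies a new investigation against past cases, here is a minimal nearest-case retrieval sketch in Python. The feature names, cases, and categories are invented for the example and are not taken from the paper.

```python
from math import sqrt

# Hypothetical past cases: feature vector -> investigation category
past_cases = [
    ({"exe_files": 12, "deleted_files": 300, "email_hits": 2},  "malware"),
    ({"exe_files": 1,  "deleted_files": 40,  "email_hits": 55}, "phishing"),
    ({"exe_files": 0,  "deleted_files": 900, "email_hits": 1},  "data destruction"),
]

def similarity(a, b):
    """Inverse Euclidean distance over the shared numeric features."""
    d = sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    return 1.0 / (1.0 + d)

def classify(new_case):
    # Retrieve the most similar stored case and reuse its classification
    best = max(past_cases, key=lambda c: similarity(new_case, c[0]))
    return best[1]

print(classify({"exe_files": 10, "deleted_files": 250, "email_hits": 0}))  # -> malware
```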

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 188
1175 Effect of Class V Cavity Configuration and Loading Situation on the Stress Concentration

Authors: Jia-Yu Wu, Chih-Han Chang, Shu-Fen Chuang, Rong-Yang Lai

Abstract:

Objective: This study examined the stress distribution in teeth with different class V restorations under different loading situations and geometries by 3D finite element (FE) analysis. Methods: A series of FE models of mandibular premolars containing class V cavities were constructed using micro-CT. The class V cavities were assigned combinations of cavity depth x occlusal-gingival height: 1x2, 1x4, 2x2, and 2x4 mm. Three alveolar bone loss conditions were examined: 0, 1, and 2 mm. A 200 N force was exerted on the buccal cusp tip in various directions (vertical, V; obliquely angled at 30°, O; oblique and parallel to the individual occlusal cavity wall, P). A 3-D FE analysis was performed, and the von Mises stress was used to summarize the stress distribution and maximum stress. Results: The maximal stress did not vary with alveolar bone height. For each geometry, the maximal stress was found at the bilateral corners of the cavity. The peak stress of the restorations was significantly higher under load P than under loads V and O, while the latter two were similar. The 2x2 mm cavity exhibited significantly increased (2.88-fold) stress under load P compared to load V, followed by the 1x2 mm (2.11-fold), 2x4 mm (1.98-fold), and 1x4 mm (1.1-fold) cavities. Conclusion: Load direction has the greatest impact on the stress results, while the effect of alveolar bone loss is minor. A load direction parallel to the cavity wall may enhance the stress concentration, especially in deep and narrow class V cavities.
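
The results above are reported as von Mises stress; for reference, a short sketch of how that scalar is obtained from a 3-D stress tensor. The numbers below are arbitrary example values, not output of the premolar FE model.

```python
import numpy as np

def von_mises(s):
    """Von Mises equivalent stress from a symmetric 3x3 Cauchy stress tensor (MPa)."""
    sx, sy, sz = s[0, 0], s[1, 1], s[2, 2]
    txy, tyz, tzx = s[0, 1], s[1, 2], s[2, 0]
    return np.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Arbitrary example stress state (MPa), not taken from the FE model
sigma = np.array([[40.0, 12.0,  0.0],
                  [12.0, 25.0,  5.0],
                  [ 0.0,  5.0, 10.0]])
print(f"von Mises stress: {von_mises(sigma):.1f} MPa")
```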

Keywords: class V restoration, finite element analysis, loading situation, stress

Procedia PDF Downloads 227
1174 Comparative Study and Parallel Implementation of Stochastic Models for Pricing of European Options Portfolios using Monte Carlo Methods

Authors: Vinayak Bassi, Rajpreet Singh

Abstract:

Over the years, with the emergence of sophisticated computers and algorithms, finance has been quantified using computational prowess. Asset valuation has been one of the key components of quantitative finance. In fact, it has become one of the embryonic steps in determining the risk related to a portfolio, the main goal of quantitative finance. This study draws a comparison between the valuation output generated by two stochastic dynamic models, namely Black-Scholes and Dupire's bi-dimensional model. Both of these models are formulated for computing the valuation function for a portfolio of European options using Monte Carlo simulation methods. Although Monte Carlo algorithms have a slower convergence rate than calculus-based simulation techniques (like FDM), they work quite effectively on high-dimensional dynamic models. A fidelity gap is analyzed between the static (historical) and stochastic inputs for a sample portfolio of underlying assets. In order to enhance the performance efficiency of the model, the study emphasized the use of variance reduction methods and customized random number generators to implement parallelization. An attempt has been made to further implement Dupire's model on a GPU to achieve higher computational performance. Furthermore, ideas have been discussed around performance enhancement and bottleneck identification related to the implementation of options-pricing models on GPUs.
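
A minimal sketch of the Black-Scholes leg of such a valuation: pricing a single European call by Monte Carlo, with antithetic variates as a simple variance-reduction step. The contract parameters are illustrative, and the Dupire local-volatility model and the GPU implementation discussed above are not shown.

```python
import numpy as np

def mc_european_call(S0, K, r, sigma, T, n_paths, seed=0):
    """Monte Carlo price of a European call under Black-Scholes dynamics,
    using antithetic variates as a simple variance-reduction technique."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths // 2)
    z = np.concatenate([z, -z])                      # antithetic pairs
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(ST - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Illustrative contract: spot 100, strike 105, 1 year, 20% vol, 3% rate
price = mc_european_call(S0=100.0, K=105.0, r=0.03, sigma=0.20, T=1.0, n_paths=1_000_000)
print(f"MC call price: {price:.3f}")   # analytic Black-Scholes value is roughly 7.1
```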

Keywords: Monte Carlo, stochastic models, computational finance, parallel programming, scientific computing

Procedia PDF Downloads 142
1173 Exploring Subjective Simultaneous Mixed Emotion Experiences in Middle Childhood

Authors: Esther Burkitt

Abstract:

Background: Evidence is mounting that mixed emotions can be experienced simultaneously in different ways across the lifespan. Four types of patterns of simultaneously mixed emotions (sequential, prevalent, highly parallel, and inverse types) have been identified in middle childhood and adolescence. Moreover, the recognition of these experiences tends to develop first when children consider peers rather than the self. This evidence from children and adolescents is based on examining the presence of experiences specified in adulthood. The present study therefore applied an exhaustive coding scheme to investigate whether children experience previously unidentified types of simultaneous mixed emotional experiences. Methodology: One hundred and twenty children (60 girls) aged 7 years 1 month to 9 years 2 months (mean = 8 years 1 month; SD = 10 months) were recruited from mainstream schools across the UK. Two age groups were formed (younger, n = 61, 7 years 1 month to 8 years 1 month; older, n = 59, 8 years 2 months to 9 years 2 months) and allocated to one of two conditions, hearing vignettes describing happy and sad mixed-emotion events about either an age- and gender-matched protagonist or themselves. Results: Loglinear analyses identified new types of flexuous, vertical, and other experiences along with the established sequential, prevalent, highly parallel, and inverse types of experience. Older children recognised more complex experiences in the other condition than in the self condition. Conclusion: Several additional types of simultaneously mixed emotions are recognised in middle childhood. The theoretical relevance of simultaneous mixed emotion processing in childhood is considered, and the potential utility of the findings in emotion assessments is discussed.

Keywords: emotion, childhood, self, other

Procedia PDF Downloads 60
1172 Design and Preliminary Evaluation of Benzoxazolone-Based Agents for Targeting Mitochondrial-Located Translocator Protein

Authors: Nidhi Chadha, A. K. Tiwari, Marilyn D. Milton, Anil K. Mishra

Abstract:

Translocator protein (18 kDa, TSPO) is highly expressed during microglial activation in neuroinflammation. Although a number of PET ligands have been developed for the visualization of activated microglia, one advantageous approach is to develop a potential optical imaging (OI) probe. Our study involves the computational screening, synthesis, and evaluation of a TSPO ligand through various imaging modalities, namely PET/SPECT/optical. The initial computational screening involves pharmacophore modeling from a designed library containing oxo-benzooxazol-3-yl-N-phenyl-acetamide groups, followed by synthesis to assess the efficacy of these compounds as multimodal imaging probes. Structural modeling of the monomer, the Ala147Thr mutant, and parallel and anti-parallel TSPO dimers was performed, and docking analysis was carried out for the distinct binding sites. Computational analysis showed a variable binding profile of known diagnostic ligands and NBMP via interactions with conserved residues, along with TSPO's natural Ala147→Thr polymorphism, which showed an alteration in binding affinity due to considerable changes in tertiary structure. Preliminary in vitro binding studies show binding affinities in the range of 1-5 nM, and selectivity was also confirmed by blocking studies. In summary, this scaffold was found to be a potential probe for TSPO imaging due to its ease of synthesis, appropriate lipophilicity, and reach to specific regions of the brain.

Keywords: TSPO, molecular modeling, imaging, docking

Procedia PDF Downloads 436
1171 Compensatory Neuro-Fuzzy Inference (CNFI) Controller for Bilateral Teleoperation

Authors: R. Mellah, R. Toumi

Abstract:

This paper presents a new adaptive neuro-fuzzy controller equipped with compensatory fuzzy control (CNFI) in order not only to adjust membership functions but also to optimize the adaptive reasoning by using a compensatory learning algorithm. The proposed control structure includes two CNFI controllers, one used for force control of the master robot and the other for position control of the slave robot. The experimental results obtained show fairly high accuracy in terms of position and force tracking under free-space motion and hard-contact motion, which highlights the effectiveness of the proposed controllers.

Keywords: compensatory fuzzy, neuro-fuzzy, adaptive control, teleoperation

Procedia PDF Downloads 303
1170 The Effective Use of the Network in the Distributed Storage

Authors: Mamouni Mohammed Dhiya Eddine

Abstract:

This work aims at studying the exploitation of high-speed cluster networks for distributed storage. Parallel applications running on clusters require both high-performance communications between nodes and efficient access to the storage system. Many studies on network technologies led to the design of dedicated architectures for clusters with very fast communications between computing nodes. Efficient distributed storage in clusters has been essentially developed by adding parallelization mechanisms so that the server(s) may sustain an increased workload. In this work, we propose to improve the performance of distributed storage systems in clusters by efficiently using the underlying high-performance network to access distant storage systems. The main question we are addressing is: do high-speed cluster networks fit the requirements of transparent, efficient, and high-performance access to remote storage? We show that storage requirements are very different from those of parallel computation. High-speed cluster networks were designed to optimize communications between the different nodes of a parallel application. We study their utilization in a very different context, storage in clusters, where client-server models are generally used to access remote storage (for instance NFS, PVFS or LUSTRE). Our experimental study, based on the use of the GM programming interface of MYRINET high-speed networks for distributed storage, raised several interesting problems. Firstly, the specific memory utilization in the storage access system layers does not easily fit the traditional memory model of high-speed networks. Secondly, the client-server models that are used for distributed storage have specific requirements on message control and event processing, which are not handled by existing interfaces. We propose different solutions to solve communication control problems at the filesystem level. We show that a modification of the network programming interface is required. Data transfer issues need an adaptation of the operating system. We detail several proposals for network programming interfaces which make their utilization easier in the context of distributed storage. The integration of flexible data-transfer processing in the new MYRINET/MX programming interface is finally presented. Performance evaluations show that its usage in the context of both storage and other types of applications is easy and efficient.

Keywords: distributed storage, remote file access, cluster, high-speed network, MYRINET, zero-copy, memory registration, communication control, event notification, application programming interface

Procedia PDF Downloads 199
1169 Strategies for the Optimization of Ground Resistance in Large Scale Foundations for Optimum Lightning Protection

Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda

Abstract:

In this paper, we discuss the standard improvements which can be made to reduce the earth resistance in difficult terrains for optimum lightning protection, the practical limitations involved, and how the modeling can be refined for accurate diagnostics and ground resistance minimization. Ground resistance minimization can be achieved via three different approaches: burying vertical electrodes connected in parallel, burying horizontal conductive plates or meshes, or modifying the terrain itself, either by changing the entire terrain material in a large volume or by adding earth-enhancing compounds. The use of vertical electrodes connected in parallel poses several practical limitations. In order to prevent loss of effectiveness, it is necessary to keep a minimum distance between electrodes, typically around five times the electrode length. Otherwise, the overlapping of the local equipotential lines around each electrode reduces the efficiency of the configuration. The addition of parallel electrodes reduces the resistance and facilitates the measurement, but the basic parallel resistor formula of circuit theory will always underestimate the final resistance. Numerical simulation of the equipotential lines around the electrodes overcomes this limitation. The resistance of a single electrode will always be proportional to the soil resistivity. The electrodes are usually installed with a backfilling material of high conductivity, which increases the effective diameter. However, the improvement is marginal, since the electrode diameter enters the estimation of the ground resistance through a logarithmic function. Substances used for efficient chemical treatment must be environmentally friendly and must feature stability, high hygroscopicity, low corrosivity, and high electrical conductivity. A number of earth enhancement materials are commercially available. Many are composed of carbon-based materials or clays like bentonite. These materials can also be used as backfilling materials to reduce the resistance of an electrode. Chemical treatment of soil has environmental issues. Some products contain copper sulfate or other copper-based compounds, which may not be environmentally friendly. Carbon-based compounds are relatively inexpensive and have very low resistivities, but they also present corrosion issues. Typically, the carbon can corrode and destroy a copper electrode in around five years. These compounds also raise potential environmental concerns. Some earthing enhancement materials contain cement, which, after installation, acquires properties very close to those of concrete. This prevents the earthing enhancement material from leaching into the soil. After analyzing different configurations, we conclude that a buried conductive ring with vertical electrodes connected periodically should be the optimum baseline solution for the grounding of a large structure installed on high-resistivity terrain. In order to show this, a practical example is explained in which we simulate the ground resistance of a conductive ring buried in a terrain with a resistivity in the range of 1 kOhm·m.
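
As a numerical illustration of two points made above (the logarithmic dependence on electrode diameter and the optimism of the naive parallel combination), here is a small sketch using the classical Dwight-type expression for a single driven rod. The soil resistivity, rod dimensions, and rod count are illustrative assumptions, not the case study of the paper.

```python
import math

def rod_resistance(rho, length, diameter):
    """Ground resistance of a single vertical rod (Dwight-type approximation)."""
    a = diameter / 2.0
    return rho / (2.0 * math.pi * length) * (math.log(4.0 * length / a) - 1.0)

rho = 1000.0          # assumed soil resistivity in ohm-m (high-resistivity terrain)
R1 = rod_resistance(rho, length=3.0, diameter=0.016)
n = 4
print(f"single 3 m rod:             {R1:.0f} ohm")
print(f"naive parallel of {n} rods:  {R1 / n:.0f} ohm  (optimistic lower bound)")
# The real resistance of n nearby rods is higher because their equipotential
# regions overlap; a field simulation is needed for an accurate figure.
```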

Keywords: grounding improvements, large scale scientific instrument, lightning risk assessment, lightning standards

Procedia PDF Downloads 116
1168 A Design of Elliptic Curve Cryptography Processor based on SM2 over GF(p)

Authors: Shiji Hu, Lei Li, Wanting Zhou, DaoHong Yang

Abstract:

Data encryption is the foundation of today's communication. On this basis, how to improve the speed of data encryption and decryption is always a problem that scholars work on. In this paper, we propose an elliptic curve crypto-processor architecture based on the SM2 prime field. In terms of hardware implementation, we optimized the algorithms in different stages of the structure. For the finite-field modular operations, we propose an optimized improvement of the Karatsuba-Ofman multiplication algorithm and shorten the critical path through a pipelined structure in the algorithm implementation. Based on the SM2 recommended prime field, a fast modular reduction algorithm is used to reduce the 512-bit-wide data obtained from the multiplication unit. The radix-4 extended Euclidean algorithm was used to realize the conversion between the affine coordinate system and the Jacobian projective coordinate system. For the parallel scheduling of point operations on elliptic curves, we propose a three-level parallel structure of point addition and point doubling based on the Jacobian projective coordinate system. Combined with the scalar multiplication algorithm, we added mutual pre-operation to the point addition and point doubling operations to improve the efficiency of scalar multiplication. The proposed ECC hardware architecture was verified and implemented on Xilinx Virtex-7 and ZYNQ-7 platforms, and each 256-bit scalar multiplication operation took 0.275 ms. The performance for handling scalar multiplication is 32 times that of a CPU (dual-core ARM Cortex-A9).
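
As a plain software reference for the operation the processor accelerates, the sketch below performs affine double-and-add scalar multiplication over a small prime field. The toy curve y^2 = x^3 + 2x + 3 over GF(97) is chosen only for illustration; the actual design operates on the 256-bit SM2 recommended prime field, in Jacobian projective coordinates, with the parallel scheduling described above.

```python
def inv_mod(x, p):
    return pow(x, p - 2, p)          # modular inverse via Fermat (p prime)

def ec_add(P, Q, a, p):
    """Affine point addition on y^2 = x^3 + a*x + b over GF(p); None = infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                  # opposite points: result is the point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(k, P, a, p):
    """Left-to-right double-and-add; hardware designs interleave these steps."""
    R = None
    for bit in bin(k)[2:]:
        R = ec_add(R, R, a, p)               # point doubling
        if bit == "1":
            R = ec_add(R, P, a, p)           # point addition
    return R

# Toy parameters (NOT the SM2 curve): y^2 = x^3 + 2x + 3 over GF(97), P = (3, 6)
print(scalar_mult(20, (3, 6), a=2, p=97))
```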

Keywords: elliptic curve cryptosystems, SM2, modular multiplication, point multiplication

Procedia PDF Downloads 73
1167 Comparing the ‘Urgent Community Care Team’ Clinical Referrals in the Community with Suggestions from the Clinical Decision Support Software Dem DX

Authors: R. Tariq, R. Lee

Abstract:

Background: Additional demands placed on senior clinical teams by ongoing COVID-19 management have accelerated the need to harness the wider healthcare professional resources and upskill them to take on greater clinical responsibility safely. The UK NHS Long Term Plan (2019)¹ emphasises the importance of expanding Advanced Practitioners' (APs) roles to take on more clinical diagnostic responsibilities to cope with increased demand. In acute settings, APs are often the first point of care for patients and require training to take on initial triage responsibilities efficiently and safely. Critically, their roles include determining which onward services the patients may require, and assessing whether they can be treated at home, avoiding unnecessary admissions to the hospital. Dem Dx is a Clinical Reasoning Platform (CRP) that claims to help frontline healthcare professionals independently assess and triage patients. It guides the clinician from presenting complaints through associated symptoms to a running list of differential diagnoses, media, and national and institutional guidelines. The objective of this study was to compare the clinical referral rates and guideline adherence registered by the HMR Urgent Community Care Team (UCCT)² with Dem Dx recommendations using retrospective cases. Methodology: 192 cases seen by the UCCT were anonymised and reassessed using Dem Dx clinical pathways. We compared the UCCT's performance with Dem Dx regarding the appropriateness of onward referrals. We also compared the clinical assessment regarding adherence to NICE guidelines recorded in the clinical notes and the presence of suitable guidance in each case. The cases were audited by two medical doctors. Results: Dem Dx demonstrated appropriate referrals in 85% of cases, compared to 47% for the UCCT team (p<0.001). Of particular note, Dem Dx demonstrated an almost 65% (p<0.001) improvement in the efficacy and appropriateness of referrals over a highly experienced clinical team. The effectiveness of Dem Dx is in part attributable to the relevant NICE and local guidelines found within the platform's pathways, which were found to be suitable in 86% of cases. Conclusion: This study highlights the potential of clinical decision support, such as Dem Dx, to improve the quality of onward clinical referrals delivered by a multidisciplinary team in primary care. It demonstrated that it could support healthcare professionals in making appropriate referrals, especially those that may otherwise be overlooked, by providing suitable clinical guidelines directly embedded into cases and clear referral pathways. Further evaluation in the clinical setting has been planned to confirm these assumptions in a prospective study.

Keywords: advanced practitioner, clinical reasoning, clinical decision-making, management, multidisciplinary team, referrals, triage

Procedia PDF Downloads 130
1166 Reducing Pressure Drop in Microscale Channel Using Constructal Theory

Authors: K. X. Cheng, A. L. Goh, K. T. Ooi

Abstract:

The effectiveness of microchannels in enhancing heat transfer has been demonstrated in the semiconductor industry. In order to tap the microscale heat transfer effects into macro geometries, overcoming the cost and technological constraints, microscale passages were created in macro geometries machined using conventional fabrication methods. A cylindrical insert was placed within a pipe, and geometrical profiles were created on the outer surface of the insert to enhance heat transfer under steady-state single-phase liquid flow conditions. However, while heat transfer coefficient values of above 10 kW/m²·K were achieved, the heat transfer enhancement was accompanied by an undesirable pressure drop increment. Therefore, this study aims to address the high pressure drop issue using Constructal theory, a universal design law for both animate and inanimate systems. Two designs based on Constructal theory were developed to study the effectiveness of Constructal features in reducing the pressure drop increment as compared to parallel channels, which are commonly found in microchannel fabrication. The hydrodynamic and heat transfer performance of the Tree insert and the Constructal fin (Cfin) insert were studied using experimental methods, and the underlying mechanisms were substantiated by numerical results. In technical terms, the objective is to achieve at least comparable increments in both heat transfer coefficient and pressure drop, if not a higher increment in the former parameter. Results show that the Tree insert improved the heat transfer performance by more than 16 percent at low flow rates, as compared to the Tree-parallel insert. However, the heat transfer enhancement reduced to less than 5 percent at high Reynolds numbers. On the other hand, the pressure drop increment stayed almost constant at 20 percent. This suggests that the Tree insert has better heat transfer performance in the low Reynolds number region. More importantly, the Cfin insert displayed improved heat transfer performance along with favourable hydrodynamic performance, as compared to the Cfin-parallel insert, at all flow rates in this study. At 2 L/min, the enhancement of heat transfer was more than 30 percent, with a 20 percent pressure drop increment, as compared to the Cfin-parallel insert. Furthermore, comparable increments in both heat transfer coefficient and pressure drop were observed at 8 L/min. In other words, the Cfin insert successfully achieved the objective of this study. Analysis of the results suggests that bifurcation of flows is effective in reducing the increment in pressure drop relative to heat transfer enhancement. Optimising the geometries of the Constructal fins is therefore a potential future study for achieving a bigger stride in energy efficiency at much lower costs.

Keywords: constructal theory, enhanced heat transfer, microchannel, pressure drop

Procedia PDF Downloads 312
1165 Using Geo-Statistical Techniques and Machine Learning Algorithms to Model the Spatiotemporal Heterogeneity of Land Surface Temperature and its Relationship with Land Use Land Cover

Authors: Javed Mallick

Abstract:

In metropolitan areas, rapid changes in land use and land cover (LULC) have ecological and environmental consequences. Saudi Arabia's cities have experienced tremendous urban growth since the 1990s, resulting in urban heat islands, groundwater depletion, air pollution, loss of ecosystem services, and so on. This study examines the variance and heterogeneity in land surface temperature (LST) caused by LULC changes in Abha-Khamis Mushyet, Saudi Arabia, from 1990 to 2020. LULC was mapped using a support vector machine (SVM). The mono-window algorithm was used to calculate the land surface temperature (LST). To identify LST clusters, the local indicators of spatial association (LISA) model was applied to the spatiotemporal LST maps. In addition, the parallel coordinate plot (PCP) method was used to investigate the relationship between LST clusters and urban biophysical variables as a proxy for LULC. According to the LULC maps, urban areas increased by more than 330% between 1990 and 2018. Between 1990 and 2018, built-up areas had an 83.6% transitional probability. Furthermore, between 1990 and 2020, vegetation and agricultural land were converted into built-up areas at rates of 17.9% and 21.8%, respectively. Uneven LULC changes in built-up areas result in more LST hotspots. LST hotspots were associated with high NDBI but not with NDWI or NDVI. This study could assist policymakers in developing mitigation strategies for urban heat islands.
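
The biophysical variables referred to above are normalized-difference band indices; a short sketch of how NDVI and NDBI would be computed from reflectance arrays is given below. The arrays are random placeholders standing in for Landsat-style bands, and the mono-window LST retrieval itself is not shown.

```python
import numpy as np

def normalized_difference(b1, b2):
    """Generic normalized-difference index; cells with a zero denominator get 0."""
    b1, b2 = b1.astype("float64"), b2.astype("float64")
    denom = b1 + b2
    out = np.zeros_like(denom)
    np.divide(b1 - b2, denom, out=out, where=denom != 0)
    return out

# Placeholder reflectance arrays standing in for the red, NIR, and SWIR1 bands
red, nir, swir1 = (np.random.rand(100, 100) for _ in range(3))

ndvi = normalized_difference(nir, red)    # vegetation: (NIR - Red) / (NIR + Red)
ndbi = normalized_difference(swir1, nir)  # built-up:   (SWIR1 - NIR) / (SWIR1 + NIR)
print(float(ndvi.mean()), float(ndbi.mean()))
```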

Keywords: land use land cover mapping, land surface temperature, support vector machine, LISA model, parallel coordinate plot

Procedia PDF Downloads 55
1164 Towards a Conscious Design in AI by Overcoming Dark Patterns

Authors: Ayse Arslan

Abstract:

One of the important elements underpinning a conscious design is the degree of toxicity in communication. This study explores the mechanisms and strategies for identifying toxic content by avoiding dark patterns. Given the breadth of hate and harassment attacks, this study explores a threat model and taxonomy to assist in reasoning about strategies for detection, prevention, mitigation, and recovery. In addition to identifying some relevant techniques such as nudges, automatic detection, or human-ranking, the study suggests the use of major metrics such as the overhead and friction of solutions on platforms and users or balancing false positives (e.g., incorrectly penalizing legitimate users) against false negatives (e.g., users exposed to hate and harassment) to maintain a conscious design towards fairness.

Keywords: AI, ML, algorithms, policy, system design

Procedia PDF Downloads 102
1163 Application of Neural Petri Net to Electric Control System Fault Diagnosis

Authors: Sadiq J. Abou-Loukh

Abstract:

The present work deals with the implementation of Petri nets, which offer strong modeling capabilities, to establish a fault diagnosis model. Fault diagnosis of control systems has received considerable attention in the last decades. The formalism of representing neural networks based on Petri nets is presented. A Neural Petri Net (NPN) reasoning model is investigated and developed for the fault diagnosis process of an electric control system. The proposed NPN has the characteristics of easy establishment and high efficiency, and the fault status within the system can be described clearly when compared with traditional testing methods. The proposed system is tested, and the simulation results are given. The implementation explains the advantages of using the NPN method and can be used as a guide for different online applications.
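
To make the Petri-net side of such a model concrete, here is a minimal token-game sketch: places hold tokens, and a transition fires when all of its input places are marked. The places and the single transition below are invented stand-ins for diagnostic conditions, not the electric-control-system model of the paper, and the neural weighting of transitions is omitted.

```python
# Minimal Petri net: the marking maps each place to a token count
marking = {"overcurrent_alarm": 1, "breaker_closed": 1, "fault_located": 0}

# Each transition: (input places, output places)
transitions = {
    "diagnose_short_circuit": ({"overcurrent_alarm", "breaker_closed"}, {"fault_located"}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] > 0 for p in inputs)

def fire(name):
    inputs, outputs = transitions[name]
    for p in inputs:
        marking[p] -= 1      # consume tokens from input places
    for p in outputs:
        marking[p] += 1      # produce tokens in output places

if enabled("diagnose_short_circuit"):
    fire("diagnose_short_circuit")
print(marking)   # fault_located now holds a token
```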

Keywords: Petri net, neural Petri net, electric control system, fault diagnosis

Procedia PDF Downloads 447
1162 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique

Authors: S. Jalaja, A. M. Vijaya Prakash

Abstract:

A recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transposed and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication and a carry-save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication implemented up to n bits. As a result, we design a modified N-tap transposed and parallel symmetric FIR filter structure using the Karatsuba algorithm. The mathematical formulation of the FFA filter is derived. The proposed architecture involves a significantly lower area-delay product (ADP) than the existing block implementation. By adopting the retiming technique, the hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and is implemented using the Cadence EDA tool. The synthesized result shows better performance for different word lengths and block sizes. The design achieves switching activity reduction and low power consumption, evaluated with and without retiming for different combinations of the circuit. The proposed structure achieves more than half of the power reduction by adopting the with- and without-retiming variants compared to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and 70% less power by applying the retiming technique, and the CSA method achieves 57% and 77% less power by applying the retiming technique, compared to the previously proposed design.
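
As a software reference for the recursion the hardware exploits, the sketch below shows plain Karatsuba multiplication, which replaces the four sub-products of schoolbook multiplication with three. The operand widths and recursion cut-off are arbitrary, and the mid-range variant and carry-save adders of the paper are not modeled.

```python
def karatsuba(x, y):
    """Recursive Karatsuba multiplication of two non-negative integers."""
    n = max(x.bit_length(), y.bit_length())
    if n <= 8:                           # small operands: fall back to schoolbook multiply
        return x * y
    half = n // 2
    mask = (1 << half) - 1
    xh, xl = x >> half, x & mask         # split operands into high/low halves
    yh, yl = y >> half, y & mask
    p_hi  = karatsuba(xh, yh)
    p_lo  = karatsuba(xl, yl)
    p_mid = karatsuba(xh + xl, yh + yl)  # one multiply replaces the two cross terms
    return (p_hi << (2 * half)) + ((p_mid - p_hi - p_lo) << half) + p_lo

a, b = 0x1234_5678_9ABC_DEF0, 0x0FED_CBA9_8765_4321
assert karatsuba(a, b) == a * b
print(hex(karatsuba(a, b)))
```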

Keywords: carry save adder Karatsuba multiplication, mid range Karatsuba multiplication, modified FFA and transposed filter, retiming

Procedia PDF Downloads 212
1161 Ontology-Based Representation of Islamic Rules to Perform Salah

Authors: Hamza Zafar, Quratulain Rajput

Abstract:

Salah (نماز) is one of the five pillars of Islam and is obligatory for every Muslim. However, due to a lack of Islamic knowledge, it might be very difficult for a layperson to perform it correctly. This paper presents an ontology-based representation of the Islamic rules for performing Salah. The Salah ontology has been built under the guidance of a domain expert in light of the Quran and Hadith. The ontology consists of basic concepts as well as relationships among concepts and constraints on them. The basic concepts include cleanness, body cover, Salah timing, and the steps to perform Salah. The SWRL rule language has been used to represent the rules that determine whether the Salah was performed correctly or should be repeated. Finally, we evaluate the use of the Salah ontology through example user queries expressed as SPARQL queries.
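
To give a flavour of what such an evaluation query might look like, here is a small Python sketch using rdflib. The namespace, class, and property names are placeholders invented for the example; they are not the vocabulary of the actual Salah ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/salah#")   # placeholder namespace, not the real ontology
g = Graph()
# Toy facts: one prayer observation with its cleanliness state and unit count
g.add((EX.obs1, RDF.type, EX.SalahPerformance))
g.add((EX.obs1, EX.hasCleanness, Literal(True)))
g.add((EX.obs1, EX.performedRakat, Literal(4)))

query = """
PREFIX ex: <http://example.org/salah#>
SELECT ?obs WHERE {
    ?obs a ex:SalahPerformance ;
         ex:hasCleanness true ;
         ex:performedRakat ?n .
    FILTER(?n >= 4)
}
"""
for row in g.query(query):
    print("performance meeting the constraints:", row.obs)
```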

Keywords: prayer, salah, ontology, SPARQL queries, reasoning

Procedia PDF Downloads 390
1160 Comparative Analysis of Control Techniques Based Sliding Mode for Transient Stability Assessment for Synchronous Multicellular Converter

Authors: Rihab Hamdi, Amel Hadri Hamida, Fatiha Khelili, Sakina Zerouali, Ouafae Bennis

Abstract:

This paper features a comparative performance study of sliding mode controllers (SMC) for closed-loop voltage control of a direct current to direct current (DC-DC) three-cell buck converter connected in parallel, operating in continuous conduction mode (CCM): SMC based on pulse-width modulation (PWM) versus SMC based on hysteresis modulation (HM), where an adaptive feedforward technique is adopted. On one hand, for the PWM-based SM, the approach is to incorporate a fixed-frequency PWM scheme which is effectively a variant of SM control. On the other hand, for the HM-based SM, an adaptive feedforward control is introduced that makes the hysteresis band variable in the hysteresis modulator of the SM controller, with the aim of restricting the switching frequency variation in the case of any change of the line input voltage or output load variation. The results obtained under load change, input change, and reference change clearly demonstrate a similar dynamic response for both proposed techniques; their effectiveness lies in fast and smooth tracking of the desired output voltage. The PWM-based SM technique shows slightly better dynamic behavior than the HM-based SM technique, and both provide stability under all operating conditions. Simulation studies in the MATLAB/Simulink environment have been performed to verify the concept.
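
One illustrative reading of the adaptive feedforward idea: in a hysteresis-modulated buck cell, the switching frequency depends on the input voltage through the inductor current ripple, so scaling the band with the measured input keeps the frequency roughly constant. The sketch below only evaluates that standard ripple relation with assumed component values; it is not the controller design of the paper.

```python
L, Vo, f_target = 50e-6, 12.0, 20e3     # assumed inductance, output voltage, target frequency

def switching_freq(Vin, band):
    """Hysteretic buck cell: f_sw from the inductor current ripple relation."""
    return Vo * (Vin - Vo) / (Vin * L * band)

def adaptive_band(Vin):
    """Feedforward band that holds f_sw at f_target as the input voltage moves."""
    return Vo * (Vin - Vo) / (Vin * L * f_target)

for Vin in (20.0, 24.0, 30.0):
    fixed = switching_freq(Vin, band=4.0)            # fixed 4 A band: frequency drifts
    adapt = switching_freq(Vin, adaptive_band(Vin))  # adaptive band: frequency held
    print(f"Vin={Vin:>4} V  fixed band: {fixed/1e3:5.1f} kHz   adaptive band: {adapt/1e3:5.1f} kHz")
```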

Keywords: DC-DC converter, hysteresis modulation, parallel multi-cells converter, pulse-width modulation, robustness, sliding mode control

Procedia PDF Downloads 146
1159 Enhanced Acquisition Time of a Quantum Holography Scheme within a Nonlinear Interferometer

Authors: Sergio Tovar-Pérez, Sebastian Töpfer, Markus Gräfe

Abstract:

This work proposes a technique that decreases the detection acquisition time of quantum holography schemes to one-third, which opens up the possibility of imaging moving objects. Since its invention, quantum holography with undetected photons has gained interest in the scientific community. This is mainly due to its ability to tailor the detected wavelengths according to the needs of the scheme implementation. While this wavelength flexibility grants the scheme a wide range of possible applications, an important matter was yet to be addressed. Since the scheme uses digital phase-shifting techniques to retrieve the information of the object out of the interference pattern, it is necessary to acquire a set of at least four images of the interference pattern with well-defined phase steps to recover the full object information. Hence, the imaging method requires long acquisition times to produce well-resolved images. As a consequence, the measurement of moving objects remains out of reach of the imaging scheme. This work presents the use and implementation of a spatial light modulator along with a digital holographic technique called quasi-parallel phase shifting. This technique uses the spatial light modulator to build a structured phase image consisting of a chessboard pattern containing the different phase steps for digitally calculating the object information. Owing to the reduction in the number of needed frames, the acquisition time is reduced by a significant factor. This technique opens the door to the implementation of the scheme for moving objects. In particular, the application of this scheme to imaging living specimens comes one step closer.
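
For reference, the conventional four-step phase-shifting arithmetic that the quasi-parallel scheme rearranges into a single structured frame: with phase steps of 0, pi/2, pi, and 3pi/2, the object phase follows from four intensity images. The data below are synthetic and only illustrate the reconstruction formula.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, (64, 64))           # "unknown" object phase
A, B = 1.0, 0.5                                      # background and modulation depth
# Four phase-shifted interferograms (steps 0, pi/2, pi, 3pi/2)
frames = [A + B * np.cos(phi + step) for step in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]

I1, I2, I3, I4 = frames
phi_rec = np.arctan2(I4 - I2, I1 - I3)               # standard four-step reconstruction
err = np.abs(np.angle(np.exp(1j * (phi_rec - phi)))) # wrapped phase error
print("max reconstruction error:", float(err.max()))
# In the quasi-parallel variant, the four phase steps are interleaved as a chessboard
# pattern on the SLM within one structured frame instead of four sequential exposures.
```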

Keywords: quasi-parallel phase shifting, quantum imaging, quantum holography, quantum metrology

Procedia PDF Downloads 91
1158 6 DOF Cable-Driven Haptic Robot for Rendering High Axial Force with Low Off-Axis Impedance

Authors: Naghmeh Zamani, Ashkan Pourkand, David Grow

Abstract:

This paper presents the design and mechanical model of a hybrid impedance/admittance haptic device optimized for applications like bone drilling, spinal awl probe use, and other surgical techniques where high force is required in the tool-axial direction and low impedance is needed in all other directions. The required performance levels cannot be satisfied by existing off-the-shelf haptic devices. This design may allow critical improvements in simulator fidelity for surgery training. The device consists primarily of two low-mass (carbon fiber) plates with a rod passing through them. Collectively, the device provides 6 DOF. The rod slides through a bushing in the top plate and is connected to the bottom plate with a universal joint constrained to move in only 2 DOF, allowing axial torque display to the user's hand. The two parallel plates are actuated and located by means of four cables pulled by motors. The forward kinematic equations are derived to ensure that the plates' orientation remains constant. The corresponding equations are solved using the Newton-Raphson method. The static force/torque equations are also presented. Finally, we present the predicted distribution of location error, cable velocity, cable tension, force, and torque for the device. These results and preliminary hardware fabrication indicate that this design may provide a revolutionary approach to haptic display of many surgical procedures by means of an architecture that allows arbitrary workspace scaling; the height and width of the workspace can be scaled arbitrarily.
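
The forward-kinematics solution mentioned above reduces to root finding; below is a generic Newton-Raphson sketch with a finite-difference Jacobian, applied to a stand-in constraint function. The actual cable-length equations of the device are not reproduced, so the residual used here is purely illustrative.

```python
import numpy as np

def newton_raphson(residual, x0, tol=1e-9, max_iter=50):
    """Solve residual(x) = 0 with Newton-Raphson and a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, x.size))
        h = 1e-6
        for j in range(x.size):                 # numerical Jacobian, column by column
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residual(xp) - r) / h
        x = x - np.linalg.solve(J, r)
    return x

# Stand-in for the device's pose-constraint equations (illustrative only)
def toy_residual(pose):
    x, y = pose
    return np.array([x**2 + y**2 - 1.0,        # e.g. a squared cable-length constraint
                     x - y])                   # e.g. an orientation constraint

print(newton_raphson(toy_residual, x0=[1.0, 0.2]))   # converges to about (0.707, 0.707)
```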

Keywords: cable direct driven robot, haptics, parallel plates, bone drilling

Procedia PDF Downloads 241
1157 Computer Aided Assembly Attributes Retrieval Methods for Automated Assembly Sequence Generation

Authors: M. V. A. Raju Bahubalendruni, Bibhuti Bhusan Biswal, B. B. V. L. Deepak

Abstract:

Achieving an appropriate assembly sequence requires thorough verification of its physical feasibility. For this purpose, industrial engineers use several assembly predicates, namely liaison, geometric feasibility, stability, and mechanical feasibility. However, testing an assembly sequence against these predicates requires a large amount of assembly information. Extracting such assembly information from an assembled product is a time-consuming and highly skilled task involving complex reasoning methods. In this paper, computer-aided methods are proposed to extract all the necessary assembly information from the computer-aided design (CAD) environment in order to perform assembly sequence planning efficiently. These methods use the basic capabilities of the three-dimensional solid modelling and assembly modelling methods used in CAD software, considering the equilibrium laws of physical bodies.

Keywords: assembly automation, assembly attributes, assembly, CAD

Procedia PDF Downloads 278
1156 Psycho-social Antecedents of Goal Setting and Self-Control of Thai University Students

Authors: Duchduen Bhanthumnavin

Abstract:

One of the most important characteristics for increasing the competitive ability of undergraduate students in the post-COVID-19 era is goal setting and self-control. This correlational study aims to investigate the influence of psycho-social antecedents on goal setting and self-control in 550 Thai university students. Results from multiple regression analysis revealed that the important predictors of this characteristic were reasoning ability, psychological immunity, attitudes toward competition, core self-evaluation, and family nurture, which yielded a predictive percentage of 54.28 in the total sample. Moreover, the analysis identified three at-risk groups, namely male students, low-GPA students, and students with siblings. Discussion and implications, both in general and specifically for the at-risk groups, are offered.

Keywords: antecedents, plan and self-control, predictors, university students

Procedia PDF Downloads 46
1155 An Institutional Analysis of IFRS Adoption in Poor Jurisdictions

Authors: Catalina Florentina Pricope

Abstract:

The last two decades witnessed a movement towards the harmonization of international financial reporting standards (IFRS) throughout the global economy. This investigation seeks to identify the factors that could explain the adoption of IFRS by poor jurisdictions. While there has been a considerable amount of literature published on the effects and key drivers of IFRS adoption in both developed and developing countries, little attention has been paid to jurisdictions with less developed capital markets and low income levels exclusively. Drawing upon the Institutional Isomorphism theory and analyzing a sample of 45 poor jurisdictions between 2008 and 2013, the study empirically shows that poor jurisdictions are driven by legitimacy concerns rather than by economic reasoning to adopt an international accounting perspective. This in turn has implications for the IASB, as it should seek to influence institutional pressures within a particular jurisdiction in order to promote IFRS adoption.

Keywords: IFRS adoption, isomorphism, poor jurisdictions, accounting harmonization

Procedia PDF Downloads 252
1154 Heterogeneous Artifacts Construction for Software Evolution Control

Authors: Mounir Zekkaoui, Abdelhadi Fennan

Abstract:

Software evolution control requires a deep understanding of changes and their impact on the system's different heterogeneous artifacts. An understanding of the descriptive knowledge of the developed software artifacts is a prerequisite for the success of the evolutionary process. Implementing an evolutionary process means making changes, more or less significant, to many heterogeneous software artifacts such as source code, analysis and design models, unit tests, XML deployment descriptors, user guides, and others. These changes can be a source of degradation of the modified software in functional, qualitative, or behavioral terms. Hence the need for a unified approach for the extraction and representation of the different heterogeneous artifacts, in order to ensure a unified and detailed description of the software artifacts that is exploitable by several software tools and allows those responsible for the evolution to reason about the changes concerned.

Keywords: heterogeneous software artifacts, software evolution control, unified approach, meta model, software architecture

Procedia PDF Downloads 416