Search results for: distributed computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2894

2264 Acacia mearnsii De Wild-A New Scourge on Cork Oak Forests of El Kala National Park (North-Eastern Algeria)

Authors: Samir Chekchaki, Arifa Beddiar

Abstract:

Nowadays, more and more species are introduced outside their natural range. While most of them remain inconspicuous, some may adopt a much more dynamic behavior. Indeed, we have witnessed in recent decades the development of high forests of Acacia mearnsii in El Kala National Park. Introduced intentionally, this legume, intended to make money (nitrogen supply for industrial plantations of Eucalyptus), became one of the most invasive species and one of the most costly in terms of forest management. It has crossed all barriers: it has acclimatized, naturalized and then expanded through diverse landscapes; entered into competition with native species such as cork oak; and altered ecosystem functioning. Therefore, it is interesting to analyze this new threat by relying on plants as bio-indicators for assessing biodiversity at different scales. We identified the species present in several plots distributed across a range of vegetation types subjected to different degrees of disturbance, using the Braun-Blanquet method. Fifty-six species have been recorded, distributed in 48 genera and 29 families. The analysis of the relative frequency of species, correlated with relative abundance, clearly shows that Acacia mearnsii remains marginal. The ecological analysis of this biological invasion shows that disturbances of either natural or anthropogenic origin (fire, prolonged drought, cutting) are the factors that exacerbate invasion by opening invasion windows. The breaking of the long-lasting (and variable) physical dormancy of Acacia mearnsii seeds is ensured by thermal shock, in relation to its heliophilous character.

Keywords: Acacia mearnsii De Wild, El Kala National park, fire, invasive, vegetation

Procedia PDF Downloads 355
2263 Application of Griddization Management to Construction Hazard Management

Authors: Lingzhi Li, Jiankun Zhang, Tiantian Gu

Abstract:

Hazard management that can prevent fatal accidents and property losses is a fundamental process during the buildings’ construction stage. However, due to a lack of safety supervision resources and operational pressures, the implementation of hazard management is poor and ineffective in China. In order to improve the quality of construction safety management, it is critical to explore the use of information technologies to ensure that the process of hazard management is efficient and effective. After exploring the existing problems of construction hazard management in China, this paper develops a griddization management model for construction hazard management. First, following the knowledge grid infrastructure, a griddization computing infrastructure for construction hazard management is designed which includes five layers: resource entity layer, information management layer, task management layer, knowledge transformation layer and application layer. This infrastructure serves as the technical support for realizing grid management. Second, this study divides construction hazards into grids at the city, district and construction site levels according to grid principles. Third, a griddization management process including hazard identification, assessment and control is developed; all stakeholders of construction safety management, such as owners, contractors, supervision organizations and government departments, should take the corresponding responsibilities in this process. Finally, a case study based on actual construction hazard identification, assessment and control is used to validate the effectiveness and efficiency of the proposed griddization management model. The advantage of the designed model is that it realizes information sharing and cooperative management between the various safety management departments.
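
To make the grid division concrete, the following minimal sketch (our own illustrative data model, not the authors' system; all class names, levels and example values are assumptions) shows how a hazard record could be attached to a site-level grid cell nested within district- and city-level cells and tracked through identification, assessment and control:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-level grid hierarchy (city -> district -> site)
# and of a hazard record moving through identification, assessment and control.
@dataclass
class GridCell:
    level: str                    # "city", "district" or "construction_site"
    name: str
    parent: "GridCell | None" = None
    hazards: list = field(default_factory=list)

@dataclass
class Hazard:
    description: str
    status: str = "identified"    # -> "assessed" -> "controlled"
    risk_score: float = 0.0
    responsible: str = ""         # owner, contractor, supervision organization or government

city = GridCell("city", "Shanghai")
district = GridCell("district", "Pudong", parent=city)
site = GridCell("construction_site", "Tower A site", parent=district)

h = Hazard("Unprotected edge on 12th floor")
site.hazards.append(h)
h.status, h.risk_score, h.responsible = "assessed", 7.5, "contractor"
print(f"{site.parent.parent.name}/{site.parent.name}/{site.name}: {h}")
```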

Keywords: construction hazard, griddization computing, grid management, process

Procedia PDF Downloads 273
2262 Cloud Resources Utilization and Science Teacher’s Effectiveness in Secondary Schools in Cross River State, Nigeria

Authors: Michael Udey Udam

Abstract:

Background: This study investigated the impact of cloud resources, a component of cloud computing, on science teachers’ effectiveness in secondary schools in Cross River State. Three (3) research questions and three (3) alternative hypotheses guided the study. Method: The descriptive survey design was adopted for the study. The population of the study comprised 1209 science teachers in public secondary schools of Cross River State. Sample: A sample of 487 teachers was drawn from the population using a stratified random sampling technique. A researcher-made structured questionnaire with 18 items was used for data collection. Research question one was answered using the Pearson Product Moment Correlation, while research question two and the hypotheses were answered using Analysis of Variance (ANOVA) statistics in the Statistical Package for Social Sciences (SPSS) at a 0.05 level of significance. Results: The results of the study revealed that there is a positive correlation between the utilization of cloud resources in teaching and teaching effectiveness among science teachers in secondary schools in Cross River State; there is a negative correlation between gender and utilization of cloud resources among science teachers; and there is a significant correlation between teaching experience and the utilization of cloud resources among science teachers. Conclusion: The study justifies the effectiveness of the Cross River State government policy of introducing cloud computing into the education sector. The study recommends that the policy should be sustained.

Keywords: cloud resources, science teachers, effectiveness, secondary school

Procedia PDF Downloads 69
2261 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements

Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo

Abstract:

Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.
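
For orientation only, the baseline that the adjoint formulation is compared against can be sketched with a toy example: central finite differences of a made-up 1D two-layer "measurement" with respect to the bed boundary depth. The logistic forward model below is purely an assumption for illustration; in the real setting each perturbation requires an additional expensive forward simulation, which is exactly the cost the adjoint-based approach avoids.

```python
import numpy as np

# Toy 1D "measurement": apparent resistivity seen by a tool at depth z_tool in a
# two-layer medium (rho1 above the bed boundary, rho2 below). The smooth logistic
# weighting is only a stand-in for a real electromagnetic forward model.
def measurement(z_bed, z_tool=50.0, rho1=10.0, rho2=100.0, width=5.0):
    w = 1.0 / (1.0 + np.exp(-(z_tool - z_bed) / width))   # fraction of sensitivity below the bed
    return (1.0 - w) * rho1 + w * rho2

def d_measurement_d_zbed_fd(z_bed, h=1e-3):
    """Central finite-difference derivative w.r.t. the bed boundary position.
    Each derivative costs two extra forward solves per boundary, which is what
    the adjoint formulation described above is designed to avoid."""
    return (measurement(z_bed + h) - measurement(z_bed - h)) / (2.0 * h)

if __name__ == "__main__":
    for z in (40.0, 50.0, 60.0):
        print(f"z_bed = {z:5.1f} m, dM/dz_bed = {d_measurement_d_zbed_fd(z):+.4f}")
```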

Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation

Procedia PDF Downloads 174
2260 Yawning Computing Using Bayesian Networks

Authors: Serge Tshibangu, Turgay Celik, Zenzo Ncube

Abstract:

Road crashes kill more than a million people every year and leave millions more injured or permanently disabled. Various annual reports reveal that the percentage of fatal crashes due to fatigue or the driver falling asleep comes directly after the percentage of fatal crashes due to intoxicated drivers. This percentage is higher than the combined percentage of fatal crashes due to illegal/unsafe U-turns and illegal/unsafe reversing. Although a relatively small percentage of police reports on road accidents highlight drowsiness and fatigue, the importance of these factors is greater than we might think, hidden by the undercounting of their occurrences. Some scenarios show that these factors are significant in accidents with killed and injured people. Hence the need for an automatic driver fatigue detection system in order to considerably reduce the number of accidents owing to fatigue. This research approaches the driver fatigue detection problem in an innovative way by combining cues collected from both temporal analysis of drivers’ faces and the environment. Monotony in the driving environment is inter-related with visual symptoms of fatigue on drivers’ faces to achieve fatigue detection. Optical and infrared (IR) sensors are used to analyse the monotony in the driving environment and to detect the visual symptoms of fatigue on the human face. Internal cues from drivers’ faces and external cues from the environment are combined together using machine learning algorithms to automatically detect fatigue.
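
As a minimal sketch of how internal and external cues could be fused in a Bayesian network (the three-node structure Monotony -> Fatigue -> Yawning and all probability values below are illustrative assumptions, not figures from this study), the posterior fatigue probability can be obtained by simple enumeration:

```python
# Illustrative numbers only (not from the paper): a three-node Bayesian network
# Monotony -> Fatigue -> Yawning, queried by enumeration over the hidden state.
P_monotony = 0.3                                   # P(M=1)
P_fatigue_given_m = {1: 0.40, 0: 0.10}             # P(F=1 | M)
P_yawn_given_f = {1: 0.70, 0: 0.15}                # P(Y=1 | F)

def posterior_fatigue(yawn_obs, monotony_obs):
    """P(Fatigue=1 | Yawning=yawn_obs, Monotony=monotony_obs)."""
    def joint(f):
        p_m = P_monotony if monotony_obs else 1.0 - P_monotony
        p_f = P_fatigue_given_m[monotony_obs] if f else 1.0 - P_fatigue_given_m[monotony_obs]
        p_y = P_yawn_given_f[f] if yawn_obs else 1.0 - P_yawn_given_f[f]
        return p_m * p_f * p_y
    num = joint(1)
    return num / (num + joint(0))

print(f"P(fatigue | yawning, monotonous road) = {posterior_fatigue(1, 1):.2f}")
print(f"P(fatigue | no yawning, varied road)  = {posterior_fatigue(0, 0):.2f}")
```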

Keywords: intelligent transportation systems, bayesian networks, yawning computing, machine learning algorithms

Procedia PDF Downloads 453
2259 Antibacterial Wound Dressing Based on Metal Nanoparticles Containing Cellulose Nanofibers

Authors: Mohamed Gouda

Abstract:

Antibacterial wound dressings based on cellulose nanofibers containing different metal nanoparticles (CMC-MNPs) were synthesized using an electrospinning technique. First, composites of carboxymethyl cellulose containing different metal nanoparticles (CMC/MNPs), such as copper nanoparticles (CuNPs), iron nanoparticles (FeNPs), zinc nanoparticles (ZnNPs), cadmium nanoparticles (CdNPs) and cobalt nanoparticles (CoNPs), were synthesized, and these composites were then transferred to the electrospinning process. The synthesized CMC-MNPs were characterized using scanning electron microscopy (SEM) coupled with energy-dispersive X-ray spectroscopy (EDX), and UV-visible spectroscopy was used to confirm nanoparticle formation. The SEM images clearly showed regular flat shapes with semi-porous surfaces. All MNPs were well distributed inside the backbone of the cellulose without aggregation. The average particle diameters were 29-39 nm for ZnNPs, 29-33 nm for CdNPs, 25-33 nm for CoNPs, 23-27 nm for CuNPs and 22-26 nm for FeNPs. Surface morphology, water uptake, release of MNPs from the nanofibers in water, and antimicrobial efficacy were studied. SEM images revealed that the electrospun CMC-MNPs nanofibers are smooth and uniformly distributed without bead formation, with average fiber diameters in the range of 300 to 450 nm; fiber diameters were not affected by the presence of MNPs. TEM images showed that MNPs are present in/on the electrospun CMC-MNPs nanofibers and are spherical in shape. The CMC-MNPs nanofibers showed good hydrophilic properties and had excellent antibacterial activity against the Gram-negative bacterium Escherichia coli and the Gram-positive bacterium Staphylococcus aureus.

Keywords: electrospinning technique, metal nanoparticles, cellulosic nanofibers, wound dressing

Procedia PDF Downloads 327
2258 FRATSAN: A New Software for Fractal Analysis of Signals

Authors: Hamidreza Namazi

Abstract:

Fractal analysis assesses the fractal characteristics of data. It consists of several methods for assigning fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion, networks, etc. Fractal analysis is now widely used in all areas of science. An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; rather, other essential characteristics have to be considered. For this purpose, a Visual C++ based software package called FRATSAN (FRActal Time Series ANalyser) was developed which extracts information from signals through three measures. These measures are the fractal dimension, Jeffrey’s measure and the Hurst exponent. After computing these measures, the software plots the graphs for each measure. Besides computing the three measures, the software can classify whether or not the signal is fractal. In fact, the software uses a dynamic method of analysis for all the measures. A sliding window is selected with a length equal to 10% of the total number of data entries. This sliding window is moved one data entry at a time to obtain all the measures. This makes the computation very sensitive to slight changes in the data, thereby giving the user an acute analysis of the data. In order to test the performance of the software, a set of EEG signals was given as input and the results were computed and plotted. The software is useful not only for fundamental fractal analysis of signals but also for other purposes. For instance, by analyzing the Hurst exponent plot of an EEG signal from a patient with epilepsy, the onset of a seizure can be predicted by noticing sudden changes in the plot.
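
A rough Python sketch of this kind of sliding-window analysis is shown below for the Hurst exponent only; the rescaled-range (R/S) estimator and the synthetic test signal are our assumptions, since FRATSAN itself is a Visual C++ package whose exact estimators are not reproduced here.

```python
import numpy as np

def hurst_rs(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    sizes = np.unique(np.logspace(3, np.log2(len(x) // 2), num=10, base=2).astype(int))
    rs = []
    for n in sizes:
        segs = x[: (len(x) // n) * n].reshape(-1, n)
        dev = np.cumsum(segs - segs.mean(axis=1, keepdims=True), axis=1)
        rng_ = dev.max(axis=1) - dev.min(axis=1)           # range of cumulative deviations
        std = segs.std(axis=1)
        rs.append((rng_[std > 0] / std[std > 0]).mean())
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)    # log(R/S) ~ H * log(n)
    return slope

def sliding_hurst(signal, frac=0.10):
    """Hurst exponent in a window of ~10% of the samples, moved one sample at a time,
    mirroring the dynamic analysis described for FRATSAN."""
    w = max(64, int(len(signal) * frac))
    return np.array([hurst_rs(signal[i:i + w]) for i in range(len(signal) - w + 1)])

noise = np.random.default_rng(0).standard_normal(2000)     # white-noise stand-in (H ~ 0.5)
print(f"global Hurst estimate: {hurst_rs(noise):.2f}")
win = sliding_hurst(noise[:500])
print(f"sliding-window estimates: mean {win.mean():.2f}, min {win.min():.2f}, max {win.max():.2f}")
```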

Keywords: EEG signals, fractal analysis, fractal dimension, hurst exponent, Jeffrey’s measure

Procedia PDF Downloads 465
2257 Effect of Depth on Texture Features of Ultrasound Images

Authors: M. A. Alqahtani, D. P. Coleman, N. D. Pugh, L. D. M. Nokes

Abstract:

In diagnostic ultrasound, the echographic B-scan texture is an important area of investigation since it can be analyzed to characterize the histological state of internal tissues. An important factor requiring consideration when evaluating ultrasonic tissue texture is the depth. The attenuation of ultrasound with depth, the size of the region of interest, gain, and dynamic range are important variables to consider, as they can influence the analysis of texture features. These sources of variability have to be considered carefully when evaluating image texture, as different settings might influence the resultant image. The aim of this study is to investigate the effect of depth on texture features in vivo using a 3D ultrasound probe. The medial head of the left gastrocnemius muscle of 10 healthy subjects was scanned. Two regions, A and B, were defined at different depths within the gastrocnemius muscle boundary. The size of both ROIs was 280×20 pixels and the distance between regions A and B was kept constant at 5 mm. Texture parameters including gray level, variance, skewness, kurtosis, co-occurrence matrix, run length matrix, gradient, autoregressive (AR) model and wavelet transform were extracted from the images. The paired t-test was used to test the depth effect for the normally distributed data and the Wilcoxon-Mann-Whitney test was used for the non-normally distributed data. The gray level, variance, and run length matrix were significantly lowered when the depth increased, while the other texture parameters showed similar values at different depths. All the texture parameters showed no significant difference between depths A and B (p > 0.05) except for gray level, variance and run length matrix (p < 0.05). This indicates that gray level, variance, and run length matrix are depth dependent.
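
To make the first-order texture features concrete, a small sketch is given below on synthetic pixel data of the quoted ROI size (280×20); the intensity values are invented, and in the study itself the features would be computed per subject and then compared across the 10 subjects with the paired test.

```python
import numpy as np
from scipy import stats

def first_order_texture(roi):
    """First-order texture features (mean gray level, variance, skewness, kurtosis)
    for one region of interest given as a 2-D array of pixel intensities."""
    pixels = np.asarray(roi, dtype=float).ravel()
    return {
        "gray_level": pixels.mean(),
        "variance": pixels.var(ddof=1),
        "skewness": stats.skew(pixels),
        "kurtosis": stats.kurtosis(pixels),
    }

rng = np.random.default_rng(1)
# Synthetic ROIs of 280 x 20 pixels; the deeper region B is given a lower mean
# to mimic the attenuation-related drop in gray level reported above.
roi_a = rng.normal(loc=120, scale=18, size=(280, 20))   # shallow region A
roi_b = rng.normal(loc=95, scale=18, size=(280, 20))    # deeper region B

print("region A:", first_order_texture(roi_a))
print("region B:", first_order_texture(roi_b))
```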

Keywords: ultrasound image, texture parameters, computational biology, biomedical engineering

Procedia PDF Downloads 291
2256 Cooperation of Unmanned Vehicles for Accomplishing Missions

Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin

Abstract:

The use of unmanned systems for different purposes has become very popular over the past decade, and expectations from these systems have increased incredibly in parallel. However, meeting the demands of a mission is often not possible with a single unmanned vehicle, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using the same type of vehicle together as a swarm helps especially to satisfy the time constraints of missions effectively; in other words, it allows the workload to be shared by a number of homogeneous platforms. Besides, many kinds of problems require the different capabilities of heterogeneous platforms to be used together cooperatively to achieve successful results, and in this case, cooperative working brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle, which lacks its own position information, is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission by itself without losing its way in the unknown labyrinth. Thus, in order to assist the ground vehicle, an autonomous aerial drone is also used to solve the problem cooperatively. The autonomous drone also has limited sensors, such as a downward-looking camera and an IMU, and it cannot compute its global position either. In this context, the aim is to solve the problem effectively without taking additional support or input from outside, just benefiting from the capabilities of the two autonomous vehicles. To manage the point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in coordination. In this paper, cooperative work of heterogeneous unmanned systems is handled in an applied sample scenario, and it is shown how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of different platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented against problems like spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion.

Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning

Procedia PDF Downloads 126
2255 Functional Aspects of Carbonic Anhydrase

Authors: Bashistha Kumar Kanth, Seung Pil Pack

Abstract:

Carbonic anhydrase (CA) is ubiquitously distributed in organisms and is fundamental to many eukaryotic biological processes such as photosynthesis, respiration, CO2 and ion transport, calcification and acid-base balance. However, CA occurs across the spectrum of prokaryotic metabolism in both the archaea and bacteria domains, and many individual species contain more than one class. In this review, the various roles of CA involved in cellular mechanisms are presented in order to identify the CA functions applicable for industrial use.

Keywords: carbonic anhydrase, mechanism, CO2 sequestration, respiration

Procedia PDF Downloads 488
2254 Political Deprivations, Political Risk and the Extent of Skilled Labor Migration from Pakistan: Finding of a Time-Series Analysis

Authors: Syed Toqueer Akhter, Hussain Hamid

Abstract:

Over the last few decades, an upward trend has been observed in labor migration from Pakistan. The emigrants are not just economically motivated; they are also in search of a safe living environment in more developed countries in Europe, North America and the Middle East. The opportunity cost of migration comes in the form of brain drain, that is, the loss of qualified and skilled human capital. Throughout the history of Pakistan, situations of political instability have emerged, ranging from violations of political rights and political disappearances to political assassinations. Providing security to citizens is a major issue faced in Pakistan due to the increase in crime and terrorist activities. The aim of the study is to test the impact of political instability, appearing in the form of political terror, violation of political rights and restricted civil liberty, on the skilled migration of labor. Three proxies are used to measure political instability: the political terror scale (based on a scale of 1-5, the political terror and violence that a country encounters in a particular year), political rights (a rating of 1-7 that describes political rights as the ability of people to participate without restraint in the political process) and civil liberty (a rating of 1-7; civil liberty is defined as freedom of expression and rights without government intervention). Using time series data from 1980-2011, distributed lag models were used for estimation because migration is not a one-time process; previous events and migration can lead to more migration. Our research clearly shows that political instability, appearing in the form of political terror, political rights and civil liberty, all appeared significant in explaining the extent of skilled migration from Pakistan.
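
A minimal sketch of the kind of finite distributed lag regression described above is shown below; the synthetic series merely stand in for the 1980-2011 data, and the lag length and coefficients are illustrative assumptions rather than the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic annual data standing in for 1980-2011 (the real series are not reproduced here).
rng = np.random.default_rng(7)
years = pd.RangeIndex(1980, 2012)
df = pd.DataFrame({
    "political_terror": rng.integers(1, 6, len(years)),   # 1-5 scale
    "political_rights": rng.integers(1, 8, len(years)),   # 1-7 scale
    "civil_liberty":    rng.integers(1, 8, len(years)),   # 1-7 scale
}, index=years)
df["skilled_migration"] = (
    5000 + 800 * df["political_terror"] + 300 * df["civil_liberty"]
    + rng.normal(0, 500, len(years))
)

# Finite distributed lag model: regress migration on current and lagged regressors.
lags = 2
X = pd.concat(
    {f"{col}_lag{k}": df[col].shift(k)
     for col in ["political_terror", "political_rights", "civil_liberty"]
     for k in range(lags + 1)},
    axis=1,
).dropna()
y = df["skilled_migration"].loc[X.index]
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary().tables[1])
```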

Keywords: skilled labor migration, political terror, political rights, civil liberty, distributed lag model

Procedia PDF Downloads 1024
2253 Data-Driven Simulations Tools for Der and Battery Rich Power Grids

Authors: Ali Moradiamani, Samaneh Sadat Sajjadi, Mahdi Jalili

Abstract:

Power system analysis has been a major research topic in the generation and distribution sections, in both industry and academia, for a long time. Several load flow and fault analysis scenarios are normally performed to study the performance of different parts of the grid in the context of, for example, voltage and frequency control. Software tools, such as PSCAD, PSSE, and PowerFactory DIgSILENT, have been developed to perform these analyses accurately. The distribution grid had been the passive part of the grid and had been known as the grid of consumers. However, a significant paradigm shift has happened with the emergence of Distributed Energy Resources (DERs) at the distribution level. This means that the concept of power system analysis needs to be extended to the distribution grid, especially considering self-sufficient technologies such as microgrids. Compared to the generation and transmission levels, the distribution level includes significantly more generation/consumption nodes thanks to rooftop PV solar generation and battery energy storage systems. In addition, different consumption profiles are expected from household residents, resulting in a diverse set of scenarios. The emergence of electric vehicles will make the environment even more complicated considering their charging (and possibly discharging) requirements. These complexities, as well as the large size of distribution grids, create challenges for the available power system analysis software. In this paper, we study the requirements of simulation tools in the distribution grid and how data-driven algorithms are required to increase the accuracy of the simulation results.

Keywords: smart grids, distributed energy resources, electric vehicles, battery storage systems, simulation tools

Procedia PDF Downloads 100
2252 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy

Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N. Thillaigovindan

Abstract:

For profitable businesses, queues are double-edged swords, and the pain of long wait times in a queue often frustrates customers. This paper suggests a technical way of reducing the pain of lines through a Poisson M/M1,M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline 'First Come First Served with an m policy', i.e. the FCFS-m policy. Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μj (j=1, 2). The primary condition for implementing the FCFS-m policy on these service rates μj (j=1, 2) is that either (m+1)μ2 > μ1 > mμ2 or (m+1)μ1 > μ2 > mμ1 must be satisfied. Further, waiting customers prefer server-1 whenever it becomes available for service, and server-2 is brought into service if and only if the queue length exceeds the threshold value m. Steady-state results on the queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server-2, chosen to equalize the average queue length cost with the service cost, is illustrated in a specific numerical exercise. Assuming that server-1 dynamically adjusts its service rate to μ1 while the system size is strictly less than T=(m+2) (with μ2=0), and to μ1+μ2 (with μ2>0) when the system size is greater than or equal to T, the corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1,M2/2 queues. To show that this investigation has a viable application, the results of M/M1+M2/1 queues have been used to model the processing of waiting messages at a single computer node and to measure the power consumption of the node.
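
The policy can also be sanity-checked by simulation. The sketch below is our reading of the dispatching rule (server-1 always preferred; server-2 engaged only while more than m customers wait), with parameters chosen simply to satisfy the stated condition (m+1)μ2 > μ1 > mμ2; it is not the authors' analytical solution.

```python
import math
import random

def simulate_fcfs_m(lam=1.4, mu1=1.2, mu2=0.5, m=2, n_customers=100_000, seed=42):
    """Monte Carlo estimate of the mean sojourn time in an M/M1,M2/2 queue where
    waiting customers always prefer server-1 and server-2 serves only while the
    queue length exceeds the threshold m (one plausible reading of FCFS-m)."""
    rng = random.Random(seed)
    exp = lambda rate: rng.expovariate(rate)
    next_arrival = exp(lam)
    done1 = done2 = math.inf          # scheduled completion times of the two servers
    cust1 = cust2 = None              # arrival time of the customer currently in service
    queue = []                        # arrival times of waiting customers (FCFS order)
    total_sojourn, served = 0.0, 0

    def dispatch(now):
        nonlocal done1, done2, cust1, cust2
        if cust1 is None and queue:               # server-1 is always preferred
            cust1, done1 = queue.pop(0), now + exp(mu1)
        if cust2 is None and len(queue) > m:      # server-2 only above the threshold m
            cust2, done2 = queue.pop(0), now + exp(mu2)

    while served < n_customers:
        t = min(next_arrival, done1, done2)
        if t == next_arrival:
            queue.append(t)
            next_arrival = t + exp(lam)
        elif t == done1:
            total_sojourn += t - cust1
            served += 1
            cust1, done1 = None, math.inf
        else:
            total_sojourn += t - cust2
            served += 1
            cust2, done2 = None, math.inf
        dispatch(t)
    return total_sojourn / served

# (m+1)*mu2 = 1.5 > mu1 = 1.2 > m*mu2 = 1.0, as required by the stated condition.
print(f"estimated mean sojourn time: {simulate_fcfs_m():.3f}")
```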

Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue

Procedia PDF Downloads 361
2251 Improving Junior Doctor Induction Through the Use of Simple In-House Mobile Application

Authors: Dmitriy Chernov, Maria Karavassilis, Suhyoun Youn, Amna Izhar, Devasenan Devendra

Abstract:

Introduction and Background: A well-structured and comprehensive departmental induction improves patient safety and job satisfaction amongst doctors. The aims of our project were as follows: 1. Assess the perceived preparedness of junior doctors starting their rotation in Acute Medicine at Watford General Hospital. 2. Develop a supplemental induction guide and pocket reference in the form of an iOS mobile application. 3. Collect feedback after implementing the mobile application following a trial period of 8 weeks with a small cohort of junior doctors. Materials and Methods: A questionnaire was distributed to all new junior trainees starting in the Department of Acute Medicine to assess their experience of the current induction. A mobile induction application was developed and trialled over a period of 8 weeks, distributed in addition to the existing didactic induction session. After the trial period, the same questionnaire was distributed to assess improvement in the induction experience. Analytics data were collected with users’ consent to gauge user engagement and identify areas of improvement of the application. A feedback survey about the app was also distributed. Results: A total of 32 doctors used the application during the 8-week trial period. The application was accessed 7259 times in total, with the average user spending a cumulative 37 minutes 22 seconds on the app. The most used section was Clinical Guidelines, accessed 1490 times. The app feedback survey revealed positive reviews: 100% of participants (n=15/15) responded that the app improved their overall induction experience compared to other placements; 93% (n=14/15) responded that the app improved overall efficiency in completing daily ward jobs compared to previous rotations; and 93% (n=14/15) responded that the app improved patient safety overall. In the pre-app and post-app induction surveys, participants reported: a 48% improvement in awareness of practical aspects of the job; a 26% improvement in awareness of where to locate pathways and clinical guidelines; and a 40% reduction in feelings of being overwhelmed. Conclusions and recommendations: This study demonstrates the importance of technology in medical education and clinical induction. The mobile application's average engagement time equates to over 20 cumulative hours of on-the-job training delivered across users within an 8-week period. The most used and referred-to section was Clinical Guidelines. This shows that there is high demand for an accessible pocket guide for this type of material. This simple mobile application resulted in a significant improvement in feedback about induction in our Department of Acute Medicine and will likely impact workplace satisfaction. Limitations of the application include: the post-app surveys had a small number of participants; the app is currently only available for iPhone users; some useful sections are nested deep within the app; the app lacks deep search functionality across all sections; it lacks real-time user feedback; and it requires regular review and updates. Future steps for the app include: developing a web app with an admin dashboard to simplify uploading and editing content; comprehensive search functionality; and a user feedback and peer rating system.

Keywords: mobile app, doctor induction, medical education, acute medicine

Procedia PDF Downloads 80
2250 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when the algorithm performed better than the best known classical algorithm of the time for Max-Cut graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and the work is often used as a benchmark and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover’s or Shor’s, highlights the potential that quantum computing holds. It also presents the reality of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits restricting the scale of the problem that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
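
As a toy illustration of the idea (not the actual hybrid implementation: the synthetic landscape below merely stands in for the QAOA expectation value that a simulator or backend would return), a gradient-free (mu + lambda) evolutionary loop over the two QAOA angles looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_cut(params):
    """Black-box stand-in for the QAOA expectation value <C(gamma, beta)>;
    here a smooth synthetic landscape rather than a quantum simulation."""
    gamma, beta = params
    return np.sin(2 * gamma) * np.cos(2 * beta) + 0.5 * np.cos(gamma)

def evolve(pop_size=20, generations=60, sigma=0.3):
    """Gradient-free (mu + lambda) evolution of the QAOA angles (gamma, beta):
    each parent spawns one Gaussian mutant and the best pop_size individuals survive."""
    pop = rng.uniform(0, np.pi, size=(pop_size, 2))
    for _ in range(generations):
        children = pop + rng.normal(0, sigma, size=pop.shape)
        union = np.vstack([pop, children])
        fitness = np.array([expected_cut(p) for p in union])   # could be evaluated in parallel
        pop = union[np.argsort(fitness)[::-1][:pop_size]]      # maximize the expected cut
    best = pop[0]
    return best, expected_cut(best)

best_params, best_value = evolve()
print(f"best (gamma, beta) = {best_params}, expected cut value = {best_value:.3f}")
```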

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 57
2249 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm

Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy

Abstract:

IoT networks today solve various consumer problems, from home automation systems to aiding autonomous vehicles, through the exploitation of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to aid the vehicle in reaching its destination safely and on time. IoT systems are predominantly dependent on the cloud environment for their data storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing the latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required for a certain application. Data management in fog nodes is strenuous because fog networks are dynamic in terms of their availability and hardware capability. It becomes more challenging when the nodes in the network also have short lifespans, detaching and joining frequently. When an end user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access and manage the data stored in the fog devices, as a conventional static way of managing the data does not work in fog networks. The proposed solution is a protocol that works by defining sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship over the data stored in the fog nodes using Reinforcement Learning, so that access to the data is determined dynamically based on the requests.

Keywords: IoT, fog networks, data stewardship, dynamic access policy

Procedia PDF Downloads 57
2248 Impact of Economic Globalization on Ecological Footprint in India: Evidenced with Dynamic ARDL Simulations

Authors: Muhammed Ashiq Villanthenkodath, Shreya Pal

Abstract:

Purpose: This study scrutinizes the impact of economic globalization on the ecological footprint while endogenizing economic growth and energy consumption, using data from 1990 to 2018 for India. Design/methodology/approach: Standard unit root tests have been employed to unveil the order of integration of the series. Then, cointegration was confirmed using autoregressive distributed lag (ARDL) analysis. Further, the study executed the dynamic ARDL simulation model to estimate long-run and short-run results along with simulation-based predictions. Findings: The cointegration analysis confirms the existence of a long-run association among the variables. Further, economic globalization reduces the ecological footprint in the long run. Similarly, energy consumption decreases the ecological footprint. In contrast, economic growth spurs the ecological footprint in India. Originality/value: This study contributes to the literature in many ways. First, unlike studies that employ the CO2 emissions and globalization nexus, this study employs the ecological footprint for measuring environmental quality; since it is a broader measure of environmental quality, it can offer a wide range of climate change mitigation policies for India. Second, the study executes a multivariate framework with updated series from 1990 to 2018 for India to explore the link between the ecological footprint, economic globalization, energy consumption, and economic growth. Third, the dynamic autoregressive distributed lag (ARDL) model has been used to explore the short- and long-run associations between the series. Finally, to the best of our limited knowledge, this is the first study that uses economic globalization in the ecological footprint function of India amid the trade-off between sustainable economic growth and the environment in the era of globalization.

Keywords: economic globalization, ecological footprint, India, dynamic ARDL simulation model

Procedia PDF Downloads 121
2247 Reliability Qualification Test Plan Derivation Method for Weibull Distributed Products

Authors: Ping Jiang, Yunyan Xing, Dian Zhang, Bo Guo

Abstract:

The reliability qualification test (RQT) is widely used in product development to qualify whether a product meets predetermined reliability requirements, which are mainly described in terms of reliability indices, for example, the MTBF (Mean Time Between Failures). In engineering practice, RQT plans are mandatorily referred to standards, such as MIL-STD-781 or GJB899A-2009. However, these conventional RQT plans are not preferred, as they often require long test times or carry high risks for both producer and consumer, because the methods in the standards use only the test data of the product itself. Moreover, the standards usually assume that the product lifetime is exponentially distributed, which is not suitable for complex products other than electronics. So it is desirable to develop an RQT plan derivation method that safely shortens the test time while keeping the two risks under control. To this end, an RQT plan derivation method is developed for products whose lifetime follows a Weibull distribution. The merit of the method is that expert judgment is taken into account. This is implemented by applying the Bayesian method, which translates the expert judgment into prior information on product reliability. The producer’s risk and the consumer’s risk are then calculated accordingly. The procedures to derive RQT plans are also proposed in this paper. As extra information and expert judgment are added to the derivation, the derived test plans have the potential to shorten the required test time and have satisfactorily low risks for both producer and consumer, compared with conventional test plans. A case study is provided to prove that, when using expert judgment in deriving product test plans, the proposed method is capable of finding ideal test plans that not only reduce the two risks but also shorten the required test time.
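
For intuition, the two risks of a candidate time-terminated test plan under a Weibull lifetime model can be estimated by straightforward Monte Carlo, as sketched below; the plan parameters, MTBF targets and shape value are illustrative assumptions, and the Bayesian use of expert priors proposed in the paper is not reproduced here.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(3)

def reject_probability(mtbf, shape, test_time, n_units, max_failures, n_sims=20_000):
    """Probability that a time-terminated test (n_units items, reject if more than
    max_failures fail before test_time) rejects a product whose lifetimes are
    Weibull with the given shape and an MTBF-matched scale. Values are illustrative."""
    scale = mtbf / gamma(1.0 + 1.0 / shape)        # MTBF = scale * Gamma(1 + 1/shape)
    lifetimes = scale * rng.weibull(shape, size=(n_sims, n_units))
    failures = (lifetimes <= test_time).sum(axis=1)
    return (failures > max_failures).mean()

plan = dict(shape=1.8, test_time=500.0, n_units=10, max_failures=2)
producers_risk = reject_probability(mtbf=2000.0, **plan)       # rejecting a good product
consumers_risk = 1.0 - reject_probability(mtbf=800.0, **plan)  # accepting a bad product
print(f"producer's risk ~ {producers_risk:.3f}, consumer's risk ~ {consumers_risk:.3f}")
```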

Keywords: expert judgment, reliability qualification test, test plan derivation, producer’s risk, consumer’s risk

Procedia PDF Downloads 134
2246 Local Homology Modules

Authors: Fatemeh Mohammadi Aghjeh Mashhad

Abstract:

In this paper, we give several ways for computing generalized local homology modules by using Gorenstein flat resolutions. Also, we find some bounds for vanishing of generalized local homology modules.

Keywords: a-adic completion functor, generalized local homology modules, Gorenstein flat modules

Procedia PDF Downloads 414
2245 Heat Transfer and Diffusion Modelling

Authors: R. Whalley

Abstract:

The heat transfer modelling for a diffusion process will be considered. Difficulties in computing the time-distance dynamics of the representation will be addressed. Incomplete and irrational Laplace functions will be identified as the computational issue. Alternative approaches to the response evaluation process will be provided. An illustrative application problem will be presented. Graphical results confirming the theoretical procedures employed will be provided.
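
As a small, concrete counterpart to the analytical treatment (the grid, diffusivity and boundary temperatures below are arbitrary illustrative choices, not taken from the paper), the time-distance dynamics of a 1-D diffusion problem can be computed with an explicit finite-difference scheme:

```python
import numpy as np

# Minimal explicit finite-difference sketch of the 1-D heat/diffusion equation
# u_t = alpha * u_xx, illustrating the time-distance dynamics discussed above.
alpha, length, n_x = 1.0e-4, 1.0, 51
dx = length / (n_x - 1)
dt = 0.4 * dx * dx / alpha            # satisfies the stability limit dt <= dx^2 / (2*alpha)

u = np.zeros(n_x)                     # initial temperature field
u[0], u[-1] = 100.0, 0.0              # fixed boundary temperatures

for step in range(5000):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print("temperature profile (every 10th node):", np.round(u[::10], 2))
```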

Keywords: heat, transfer, diffusion, modelling, computation

Procedia PDF Downloads 550
2244 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations

Authors: Deepak Singh, Rail Kuliev

Abstract:

The abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce by reskilling and upskilling employees, and establishing robust data management training programs, are an essential and integral part of this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. In the present study, by embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.

Keywords: master data management, IoT, AI&ML, cloud computing, data optimization

Procedia PDF Downloads 65
2243 Bench-scale Evaluation of Alternative-to-Chlorination Disinfection Technologies for the Treatment of the Maltese Tap-water

Authors: Georgios Psakis, Imren Rahbay, David Spiteri, Jeanice Mallia, Martin Polidano, Vasilis P. Valdramidis

Abstract:

The absence of surface water and progressive groundwater quality deterioration have rapidly exacerbated water scarcity, making the Mediterranean island of Malta one of the most water-stressed countries in Europe. Water scarcity challenges have been addressed by reverse osmosis desalination of seawater, 60% of which is blended with groundwater to form the current potable tap-water supply. Chlorination has been the adopted method of water disinfection prior to distribution. However, with the Maltese consumer chlorine sensory threshold being as low as 0.34 ppm, the presence of chlorine residuals and chlorination by-products in the distributed tap-water impacts negatively on its organoleptic attributes, deterring the public from consuming it. As part of the PURILMA initiative, and with the aim of minimizing the impact of chlorine residual on the quality of the distributed water, UV-C and hydrosonication have been identified as cost- and energy-effective decontamination alternatives, paving the way for more sustainable water management. Bench-scale assessment of the decontamination efficiency of UV-C (254 nm) revealed 4.7-Log10 inactivation for both Escherichia coli and Enterococcus faecalis at 36 mJ/cm2. At fluence rates >200 mJ/cm2, there was a systematic 2-Log10 difference in the reductions exhibited by E. coli and E. faecalis, suggesting that UV-C disinfection was more effective against E. coli. Hybrid treatment schemes involving hydrosonication (at 9.5 and 12.5 dm3/min flow rates with 1-5 MPa maximum pressure) and UV-C showed at least 1.1-fold greater bactericidal activity relative to the individual UV-C treatments. The observed inactivation appeared to stem from additive effects of the combined treatments, with hydrosonication-generated reactive oxygen species enhancing the biocidal activity of UV-C.

Keywords: disinfection, groundwater, hydrosonication, UV-C

Procedia PDF Downloads 168
2242 Power Energy Management For A Grid-Connected PV System Using Rule-Base Fuzzy Logic

Authors: Nousheen Hashmi, Shoab Ahmad Khan

Abstract:

Active collaboration between green energy sources and the load demand leads to serious issues related to power quality and stability. The growing number of green energy resources and distributed generators requires newer strategies to be incorporated into their operation to maintain power stability between the green energy resources and the micro-grid/utility grid. This paper presents a novel technique for power and energy management in a grid-connected photovoltaic system with energy storage, under a set of constraints including weather conditions, load shedding hours and peak pricing hours, by using a rule-based fuzzy smart grid controller to schedule power coming from multiple power sources (photovoltaic, grid, battery). The technique fuzzifies all the inputs and evaluates the fuzzy rule set to obtain fuzzy outputs before defuzzification. Simulations are run for a 24-hour period, and a rule-based power scheduler is developed. The proposed fuzzy control strategy is able to sense the continuous fluctuations in photovoltaic power generation, load demand, grid availability (load shedding patterns) and battery state of charge in order to make correct and quick decisions. The suggested fuzzy rule-based scheduler can operate well with vague inputs, and thus does not require an exact numerical model and can handle nonlinearity. This technique provides a framework for extension to handle multiple special cases for optimized working of the system.
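
A stripped-down sketch of such a rule-based fuzzy decision is given below; the membership functions, the two rules and the output universe are our own illustrative assumptions (the paper's controller uses more inputs, including load shedding and peak-pricing hours).

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def schedule_grid_share(pv_kw, soc):
    """Tiny rule-based fuzzy sketch: decide what share of the demand to draw from the
    grid, given current PV generation (0-5 kW) and battery state of charge (0-1).
    Memberships and rules are illustrative, not the paper's rule base."""
    pv_low, pv_high = tri(pv_kw, -3, 0, 3), tri(pv_kw, 2, 5, 8)
    soc_low, soc_high = tri(soc, -0.5, 0.0, 0.5), tri(soc, 0.4, 1.0, 1.6)

    # Rule strengths (min as AND, max as OR):
    # R1: IF PV is low AND SOC is low  THEN grid share is high.
    # R2: IF PV is high OR SOC is high THEN grid share is low.
    fire_high = min(pv_low, soc_low)
    fire_low = max(pv_high, soc_high)

    # Mamdani-style clipping of the output sets and centroid defuzzification (0-100 %).
    u = np.linspace(0, 100, 101)
    agg = np.maximum(np.minimum(fire_high, tri(u, 40, 100, 160)),
                     np.minimum(fire_low, tri(u, -60, 0, 60)))
    return float((u * agg).sum() / agg.sum())

print(f"cloudy evening, empty battery: {schedule_grid_share(0.3, 0.15):.1f}% from grid")
print(f"sunny noon, full battery:      {schedule_grid_share(4.5, 0.90):.1f}% from grid")
```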

Keywords: photovoltaic, power, fuzzy logic, distributed generators, state of charge, load shedding, membership functions

Procedia PDF Downloads 476
2241 Geospatial Curve Fitting Methods for Disease Mapping of Tuberculosis in Eastern Cape Province, South Africa

Authors: Davies Obaromi, Qin Yongsong, James Ndege

Abstract:

To interpolate scattered or regularly distributed data, there are approximate and exact methods; some of these methods are suited to interpolating data on a regular grid and others to data on an irregular grid. In spatial epidemiology, it is important to examine how disease prevalence rates are distributed in space and how they relate to each other within a defined distance and direction. In this study, for the geographic and graphic representation of disease prevalence, linear and biharmonic spline methods were implemented in MATLAB and used to identify, localize and compare smoothing in the distribution patterns of tuberculosis (TB) in Eastern Cape Province. The aim of this study is to produce a smoother graphical disease map of TB prevalence patterns using 3-D curve fitting techniques, especially biharmonic splines, which can suppress noise easily by seeking a least-squares fit rather than exact interpolation. The datasets are generally represented as 3D or XYZ triplets, where X and Y are the spatial coordinates and Z is the variable of interest, in this case TB counts in the province. The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function, and it has become a conventional method thanks to its high precision, simplicity and flexibility. Surface and contour plots are produced for the TB prevalence at the provincial level for 2012-2015. From the results, the general outlook of all the fittings showed a systematic pattern in the distribution of TB cases in the province, and this is consistent with some spatial statistical analyses carried out in the province. This method is rarely used in disease mapping applications, but it has the advantage that it can be evaluated at arbitrary locations rather than only on a rectangular grid, as in most traditional GIS methods of geospatial analysis.
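
In Python terms (the study itself used MATLAB, and the coordinates below are synthetic), the least-squares smoothing-spline idea can be sketched with SciPy's bivariate smoothing spline, which likewise trades exact interpolation for noise suppression:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Synthetic stand-in for the (X, Y, Z) triplets described above: X, Y are spatial
# coordinates of reporting locations and Z is a noisy TB-count surface.
rng = np.random.default_rng(5)
x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
z = 50 + 0.4 * x - 0.2 * y + 10 * np.sin(x / 15.0) + rng.normal(0, 3, 200)

# Least-squares smoothing spline: s > 0 smooths rather than interpolating exactly,
# playing the role of the biharmonic/smoothing-spline fit performed in MATLAB.
spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=len(z))

xi = yi = np.linspace(0, 100, 50)
surface = spline(xi, yi)                        # 50 x 50 grid of smoothed values for mapping
print(surface.shape, spline(35.0, 60.0)[0, 0])  # the fit can also be evaluated at arbitrary points
```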

Keywords: linear, biharmonic splines, tuberculosis, South Africa

Procedia PDF Downloads 237
2240 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, no quantum circuit implementation of these algorithms has been created up to our best knowledge. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs of the quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed, evaluating the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time allows the decision problem to be transformed into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobabilistic superposition by applying the Hadamard gate on each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including its special forms, which are the Feynman and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that this path visits and adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
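
To see what the oracle has to compute, a purely classical sketch of its logic is given below (plain Python rather than Q#, with a toy degree-2 graph of our own; the reversible-circuit construction from Toffoli/Feynman/Pauli-X gates is not reproduced): decode log2(K) bits per node as the outgoing edge used by the tour, add up the weights, check uniqueness, and compare with a cost threshold.

```python
import math
from itertools import product

def oracle_accepts(bits, adjacency, threshold):
    """Classical sketch of the oracle logic described above: decode log2(K) bits
    per node as the index of the outgoing edge used by the tour, then accept iff
    those choices trace a Hamiltonian cycle whose total weight is below threshold."""
    n = len(adjacency)
    k = len(adjacency[0])                 # bounded degree K (same for every node here)
    b = int(math.log2(k))
    cost, node, visited = 0, 0, []
    for _ in range(n):                    # follow the chosen edge out of each visited node
        visited.append(node)
        chunk = bits[node * b:(node + 1) * b]
        edge_idx = int("".join(map(str, chunk)), 2)
        nxt, w = adjacency[node][edge_idx]
        cost += w
        node = nxt
    is_cycle = node == 0 and len(set(visited)) == n   # uniqueness check + the tour closes
    return is_cycle and cost < threshold, cost

# Degree-2 toy graph on 4 nodes: adjacency[i] lists (neighbour, weight) pairs.
adjacency = [[(1, 3), (3, 2)], [(2, 4), (0, 3)], [(3, 1), (1, 4)], [(0, 2), (2, 1)]]

# Classical exhaustive sweep over all 2^(N log2 K) bit strings; Grover's algorithm with
# the threshold-lowering loop searches the same space with quadratically fewer oracle calls.
costs = [cost for bits in product([0, 1], repeat=4)
         for ok, cost in [oracle_accepts(list(bits), adjacency, threshold=10**9)] if ok]
print("minimum Hamiltonian-cycle cost on the toy graph:", min(costs))
```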

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 187
2239 Lightweight and Seamless Distributed Scheme for the Smart Home

Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro

Abstract:

Security of the smart home in terms of behavior activity pattern recognition is a totally dissimilar and unique issue compared to the security issues of other scenarios. Sensor devices (low capacity and high capacity) interact and negotiate with each other by detecting the daily behavior activity of individuals to execute common tasks. Once a device (e.g., surveillance camera, smartphone or light detection sensor) is compromised, an adversary can then get access to a specific device and can damage the daily behavior activity by altering the data and commands. In this scenario, a group of common instruction processes may get involved and generate deadlock. Therefore, an effective and suitable security solution is required for the smart home architecture. This paper proposes a seamless distributed scheme which fortifies low-computation wireless devices for secure communication. The proposed scheme is based on a lightweight key-session process to uphold a cryptographic link for the trajectory by recognizing individuals’ behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Analysis of experiments revealed that the proposed scheme strengthens the devices against device seizure attacks by recognizing daily behavior activities, minimizes the memory utilization of LCS, and keeps the network free from deadlock. Additionally, the results of a comparison with other schemes indicate that the scheme maintains efficiency in terms of computation and communication.
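
A minimal illustration of a lightweight token-based session of this general kind is sketched below; the pre-shared key, token format and HMAC construction are our assumptions for the sketch and are not the authors' exact key-session protocol.

```python
import hashlib
import hmac
import os
import time

# Illustration of a lightweight token-based session between a low-capacity sensor (LCS)
# and a high-capacity sensor (HCS); key distribution and token format are assumptions.
SHARED_KEY = os.urandom(32)          # pre-shared during device enrolment (assumed)

def issue_token(device_id: str, key: bytes = SHARED_KEY) -> bytes:
    """HCS issues an authentication token binding the device id to a timestamp."""
    msg = f"{device_id}|{int(time.time())}".encode()
    return msg + b"|" + hmac.new(key, msg, hashlib.sha256).hexdigest().encode()

def verify_token(token: bytes, max_age_s: int = 300, key: bytes = SHARED_KEY) -> bool:
    """The receiving node verifies the MAC and freshness before opening a session."""
    try:
        device_id, ts, mac = token.rsplit(b"|", 2)
    except ValueError:
        return False
    msg = device_id + b"|" + ts
    fresh = time.time() - int(ts) <= max_age_s
    return fresh and hmac.compare_digest(mac.decode(), hmac.new(key, msg, hashlib.sha256).hexdigest())

token = issue_token("light-sensor-07")
print("session accepted:", verify_token(token))
```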

Keywords: authentication, key-session, security, wireless sensors

Procedia PDF Downloads 316
2238 Methods for Solving Identification Problems

Authors: Fadi Awawdeh

Abstract:

In this work, we highlight the key concepts in using semigroup theory as a methodology used to construct efficient formulas for solving inverse problems. The proposed method depends on some results concerning integral equations. The experimental results show the potential and limitations of the method and imply directions for future work.

Keywords: identification problems, semigroup theory, methods for inverse problems, scientific computing

Procedia PDF Downloads 476
2237 Low Overhead Dynamic Channel Selection with Cluster-Based Spatial-Temporal Station Reporting in Wireless Networks

Authors: Zeyad Abdelmageid, Xianbin Wang

Abstract:

Choosing the operational channel for a WLAN access point (AP) has traditionally been a static channel assignment process initiated by the user during the deployment of the AP, which fails to cope with the dynamic conditions of the assigned channel at the station side afterward. However, the dramatically growing number of Wi-Fi APs and stations operating in the unlicensed band has led to dynamic, distributed, and often severe interference. This highlights the urgent need for the AP to dynamically select the best overall channel of operation for the basic service set (BSS) by considering the distributed and changing channel conditions at all stations. Consequently, dynamic channel selection algorithms which consider feedback from the station side have been developed. Despite the significant performance improvement, existing channel selection algorithms suffer from very high feedback overhead. Feedback latency from the STAs, due to the high overhead, can cause the eventually selected channel to no longer be optimal for operation due to the dynamic sharing nature of the unlicensed band. This has inspired us to develop our own dynamic channel selection algorithm with reduced overhead, through the proposed low-overhead, cluster-based station reporting mechanism. The main idea behind the cluster-based station reporting is the observation that STAs which are very close to each other tend to have very similar channel conditions. Instead of requesting each STA to report on every candidate channel, causing high overhead, the AP divides STAs into clusters and then assigns each STA in each cluster one channel to report feedback on. With a proper design of the cluster-based reporting, the AP does not lose any information about the channel conditions at the station side while reducing feedback overhead. The simulation results show equal performance and, at times, better performance with a fraction of the overhead. We believe that this algorithm has great potential in designing future dynamic channel selection algorithms with low overhead.
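
A toy version of the clustering and per-cluster channel assignment can be sketched as follows; the station positions, DBSCAN parameters and candidate channel list below are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Cluster stations by position with DBSCAN, then spread the candidate channels across
# each cluster so that every cluster still covers all channels while each STA reports on one.
rng = np.random.default_rng(2)
stations = np.vstack([rng.normal(center, 2.0, size=(8, 2))        # three tight groups of STAs
                      for center in [(0, 0), (30, 5), (15, 40)]])
candidate_channels = [1, 6, 11, 36, 40]

labels = DBSCAN(eps=5.0, min_samples=3).fit_predict(stations)      # -1 marks noise/outlier STAs

assignments = {}
for cluster in set(labels):
    members = np.where(labels == cluster)[0]
    for i, sta in enumerate(members):
        # Nearby STAs see similar conditions, so one measured channel per STA is enough;
        # noise STAs (label -1) are handled the same way here for simplicity.
        assignments[int(sta)] = candidate_channels[i % len(candidate_channels)]

print(assignments)
```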

Keywords: channel assignment, Wi-Fi networks, clustering, DBSCAN, overhead

Procedia PDF Downloads 114
2236 Internet of Things, Edge and Cloud Computing in Rock Mechanical Investigation for Underground Surveys

Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo

Abstract:

Rock mechanical investigation is one of the most crucial activities in underground operations, especially in surveys related to hydrocarbon exploration and production, geothermal reservoirs, energy storage, mining, and geotechnics. There is a wide range of traditional methods for deriving, collecting, and analyzing rock mechanics data. However, these approaches may not be suitable or work well in some situations, such as fractured zones. Cutting-edge technologies have been introduced to solve and optimize the mentioned issues. The Internet of Things (IoT), Edge Computing, and Cloud Computing technologies (ECt and CCt, respectively) are among the most widely used new methods employed for geomechanical studies. IoT devices act as sensors and cameras for real-time monitoring and mechanical-geological data collection of rocks, measuring, for example, temperature, movement, pressure, or stress levels. Assessment of structural integrity, especially for cap rocks within hydrocarbon systems, and of rock mass behavior, supporting further activities such as enhanced oil recovery (EOR) and underground gas storage (UGS) or improving safety risk management (SRM) and potential hazard identification (PHI), are other benefits of IoT technologies. EC techniques can process, aggregate, and analyze data collected by IoT immediately, on a real-time scale, providing detailed insights into the behavior of rocks in various situations (e.g., stress, temperature, and pressure), establishing patterns quickly, and detecting trends. Therefore, this state-of-the-art and useful technology can support autonomous systems in rock mechanical surveys, such as drilling and production (in hydrocarbon wells) or excavation (in the mining and geotechnics industries). Besides, ECt allows all rock-related operations to be controlled remotely and enables operators to apply changes or make adjustments; this feature is very important for environmental goals. More often than not, rock mechanical studies consist of different data, such as laboratory tests, field operations, and indirect information like seismic or well-logging data. CCt provides a useful platform for storing and managing a great volume of diverse information, which can be very useful in fractured zones. Additionally, CCt supplies powerful tools for predicting, modeling, and simulating rock mechanical information, especially in fractured zones within vast areas. It is also a suitable platform for sharing extensive information on rock mechanics, such as the direction and size of fractures in a large oil field or mine. The comprehensive review findings demonstrate that digital transformation through integrated IoT, Edge, and Cloud solutions is revolutionizing traditional rock mechanical investigation. These advanced technologies have empowered real-time monitoring, predictive analysis, and data-driven decision-making, culminating in noteworthy enhancements in safety, efficiency, and sustainability. Therefore, by employing IoT, CCt, and ECt, underground operations have experienced a significant boost, allowing for timely and informed actions using real-time data insights. The successful implementation of IoT, CCt, and ECt has led to optimized and safer operations, optimized processes, and environmentally conscious approaches in underground geological endeavors.

Keywords: rock mechanical studies, internet of things, edge computing, cloud computing, underground surveys, geological operations

Procedia PDF Downloads 56
2235 ABET Accreditation Process for Engineering and Technology Programs: Detailed Process Flow from Criteria 1 to Criteria 8

Authors: Amit Kumar, Rajdeep Chakrabarty, Ganesh Gupta

Abstract:

This paper illustrates the detailed accreditation process of the Accreditation Board for Engineering and Technology (ABET) for accrediting engineering and technology programs. ABET is a non-governmental agency that accredits programs in engineering and technology, applied and natural sciences, and computing sciences. ABET was founded on 10th May 1932 by the Institute of Electrical and Electronics Engineers. International industries recognize ABET-accredited institutes as having the highest standards in their academic programs. In this accreditation, there are eight criteria in general: criterion 1 describes the student outcome evaluations, criterion 2 measures the program's educational objectives, criterion 3 is the student outcome attainment calculated from the marks obtained by students, criterion 4 establishes continuous improvement, criterion 5 focuses on the curriculum of the institute, criterion 6 is about the faculty of the institute, criterion 7 measures the facilities provided by the institute and, finally, criterion 8 focuses on institutional support towards the staff of the institute. In this paper, we focus on the calculative part of each criterion with equations and suitable examples, the files and documentation required for each criterion, and the total workflow of the process. The references and the values used to illustrate the calculations are all taken from the samples provided on ABET's official website. In the final section, we also discuss the criterion-wise score weightage, followed by evaluation with timeframes and deadlines.
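
As a flavor of the "calculative part" mentioned above, the sketch below shows one common style of student outcome attainment roll-up; the marks, the 60% target, the level cut-offs and the CO-PO weights are hypothetical illustrations, not values prescribed by ABET.

```python
import numpy as np

# Hypothetical marks (out of 100) of 6 students on assessments mapped to three
# course outcomes (CO1-CO3); the 60% target threshold and the attainment levels
# are illustrative choices, not values prescribed by ABET.
marks = {
    "CO1": np.array([72, 55, 81, 64, 90, 47]),
    "CO2": np.array([66, 70, 58, 75, 80, 62]),
    "CO3": np.array([40, 52, 61, 58, 73, 49]),
}
target = 60.0                                   # a student "attains" a CO if marks >= target

def attainment_level(pct_students):
    """Map the % of students meeting the target to a 0-3 attainment level."""
    return 3 if pct_students >= 70 else 2 if pct_students >= 60 else 1 if pct_students >= 50 else 0

co_levels = {}
for co, m in marks.items():
    pct = 100.0 * (m >= target).mean()
    co_levels[co] = attainment_level(pct)
    print(f"{co}: {pct:4.1f}% of students above target -> level {co_levels[co]}")

# Roll COs up to one program outcome (PO) via a hypothetical CO-PO mapping (weights 0-3).
co_po_weights = {"CO1": 3, "CO2": 2, "CO3": 1}
po_attainment = sum(co_po_weights[c] * co_levels[c] for c in marks) / sum(co_po_weights.values())
print(f"PO attainment (weighted average of CO levels): {po_attainment:.2f} / 3")
```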

Keywords: Engineering Accreditation Committee, Computing Accreditation Committee, performance indicator, Program Educational Objective, ABET Criterion 1 to 7, IEEE, National Board of Accreditation, MOOCS, Board of Studies, stakeholders, course objective, program outcome, articulation, attainment, CO-PO mapping, CO-PO-SO mapping, PDCA cycle, degree certificates, course files, course catalogue

Procedia PDF Downloads 56