Search results for: distributed ontologies

1684 Global Historical Distribution Range of Brown Bear (Ursus arctos)

Authors: Tariq Mahmood, Faiza Lehrasab, Faraz Akrim, Muhammad Sajid Nadeem, Muhammad Mushtaq, Unza Waqar, Ayesha Sheraz, Shaista Andleeb

Abstract:

The brown bear (Ursus arctos), a member of the family Ursidae, is distributed across a wide range of habitats in North America, Europe and Asia. The global distribution range of the brown bear is suspected to be decreasing due to various factors. The species is categorized as ‘Least Concern’ globally by the IUCN Red List of Threatened Species. However, some fragmented, small populations are on the verge of extinction, as in Pakistan, where the species is listed as ‘Critically Endangered’ with a declining population trend. Importantly, the global historical distribution range of the brown bear is undocumented. Therefore, in the current study, we reconstructed and estimated the historical distribution range of the brown bear using QGIS software and also analyzed the network of protected areas in the past and current ranges of the species. Results showed that the brown bear was more widely distributed in historic times, encompassing an area of 52.6 million km² compared to its current distribution of 38.8 million km², a total range contraction of up to approximately 28%. In the past, a total of N = 62,234 protected areas, covering approximately 3.89 million km², were present in the distribution range of the species, while now a total of N = 33,313 protected areas, covering approximately 2.75 million km², are present in the current distribution range of the brown bear. The brown bear distribution range within protected areas has thus contracted by 1.15 million km², a total reduction in protected areas of 29%.

Keywords: brown bear, historic distribution, range contraction, protected areas

Procedia PDF Downloads 23
1683 Acacia mearnsii De Wild. - A New Scourge on Cork Oak Forests of El Kala National Park (North-Eastern Algeria)

Authors: Samir Chekchaki, Arifa Beddiar

Abstract:

Nowadays, more and more species are introduced outside their natural range. While most of them remain inconspicuous, some may adopt a much more dynamic behavior. Indeed, we have witnessed in recent decades the development of high forests of Acacia mearnsii in El Kala National Park. Deliberately introduced for economic purposes (nitrogen supply for industrial plantations of Eucalyptus), this legume has become one of the most invasive species and one of the most costly in terms of forest management. It has crossed all barriers: it has acclimatized, naturalized and then expanded through diverse landscapes, entered into competition with native species such as the cork oak, and altered ecosystem functioning. It is therefore interesting to analyze this new threat by relying on plants as bio-indicators for assessing biodiversity at different scales. We identified the species present in several plots distributed across a range of vegetation types subjected to different degrees of disturbance, using the Braun-Blanquet method. Fifty-six species were recorded, distributed among 48 genera and 29 families. The analysis of the relative frequency of species correlated with relative abundance clearly shows that Acacia mearnsii occupies a marginal position in these communities. The ecological analysis of this biological invasion shows that disturbances of natural or anthropogenic origin (fire, prolonged drought, cutting) are the factors that exacerbate invasion by opening invasion windows. The breaking of the long-lasting (and variable) physical dormancy of Acacia mearnsii seeds is ensured by thermal shock, in keeping with its heliophilous character.

Keywords: Acacia mearnsii De Wild, El Kala National park, fire, invasive, vegetation

Procedia PDF Downloads 337
1682 Load Balancing Technique for Energy-Efficiency in Cloud Computing

Authors: Rani Danavath, V. B. Narsimha

Abstract:

Cloud computing is emerging as a new paradigm of large-scale distributed computing. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing: it is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded. It helps in the optimal utilization of resources, enhancing the performance of the system. The goal of load balancing is to minimize resource consumption and the carbon emission rate, which is a direct need of cloud computing. This determines the need for new metrics, energy consumption and carbon emission, for energy-efficient load balancing techniques in cloud computing. Existing load balancing techniques mainly focus on reducing overhead and service response time and on improving performance, but none of them have considered energy consumption and carbon emission. Therefore, in this paper we introduce a load balancing technique oriented towards energy efficiency. This energy-efficiency load balancing technique can be used to improve the performance of cloud computing by balancing the workload across all the nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent, which will help to achieve green computing.
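
The abstract leaves the balancing policy unspecified; as a minimal sketch of energy-aware placement, assigning each task to the node whose power draw rises the least under a linear power model, one might write something like the following (the node parameters and the greedy policy are illustrative assumptions, not the paper's method):

```python
# Minimal sketch of energy-aware task placement under a linear power model.
# Node names, power figures and the greedy policy are assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float      # maximum load the node can carry
    idle_power: float    # watts drawn when idle
    busy_power: float    # watts drawn at full utilization
    load: float = 0.0

    def power(self, load: float) -> float:
        # Linear power model: idle draw plus utilization-proportional draw.
        util = min(load / self.capacity, 1.0)
        return self.idle_power + (self.busy_power - self.idle_power) * util

def assign(task_load: float, nodes: list) -> Node:
    """Place the task on the node whose total power draw increases the least."""
    feasible = [n for n in nodes if n.load + task_load <= n.capacity]
    if not feasible:
        raise RuntimeError("all nodes overloaded")
    best = min(feasible, key=lambda n: n.power(n.load + task_load) - n.power(n.load))
    best.load += task_load
    return best

nodes = [Node("n1", 100, 80, 200), Node("n2", 100, 60, 180)]
for task in (10, 25, 40, 15):
    chosen = assign(task, nodes)
    print(f"task {task} -> {chosen.name} (load {chosen.load})")
```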

Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission

Procedia PDF Downloads 427
1681 Ontology-Based Fault Detection and Diagnosis System: Querying and Reasoning Examples

Authors: Marko Batic, Nikola Tomasevic, Sanja Vranes

Abstract:

One of the strongholds in the ubiquitous efforts related to energy conservation and energy efficiency improvement is the retrofit of high energy consumers in buildings. In general, HVAC systems are the highest energy consumers in buildings. However, they usually suffer from mal-operation and/or malfunction, causing even higher energy consumption than necessary. Various Fault Detection and Diagnosis (FDD) systems can be successfully employed for this purpose, especially when it comes to application at a single device/unit level. In the case of more complex systems, where multiple devices operate in the context of the same building, significant energy efficiency improvements can only be achieved through the application of comprehensive FDD systems relying on additional higher-level knowledge, such as the devices' geographical location, served area, and their intra- and inter-system dependencies. This paper presents a comprehensive FDD system that relies on a common knowledge repository storing all critical information. The discussed system is deployed as a test-bed platform at the two airports of Fiumicino and Malpensa in Italy. This paper aims at presenting the advantages of implementing the knowledge base through the utilization of ontology, and offers improved functionalities of such a system through examples of typical queries and reasoning that enable the derivation of high-level energy conservation measures (ECM). Therefore, key SPARQL queries and SWRL rules, based on the two instantiated airport ontologies, are elaborated. The detection of high-level irregularities in the operation of airport heating/cooling plants is discussed, and an estimation of energy savings is reported.
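
The airport ontologies themselves are not reproduced in the abstract, so the following is only a shape-of-the-thing sketch of such a SPARQL query, run here via rdflib; the namespace and the class/property names (ex:Chiller, ex:servesArea, ex:hasPowerReading, ex:occupancy) are hypothetical stand-ins, not the authors' vocabulary:

```python
# Illustrative only: queries an assumed local copy of an instantiated
# airport ontology for HVAC units drawing power while their served area
# is unoccupied -- the kind of high-level irregularity the paper targets.
from rdflib import Graph

g = Graph()
g.parse("airport_ontology.ttl")  # hypothetical file name

query = """
PREFIX ex: <http://example.org/airport#>
SELECT ?unit ?area ?power WHERE {
    ?unit a ex:Chiller ;
          ex:servesArea ?area ;
          ex:hasPowerReading ?power .
    ?area ex:occupancy ?occ .
    FILTER (?power > 50.0 && ?occ = 0)
}
"""
for unit, area, power in g.query(query):
    print(f"Possible mal-operation: {unit} serving {area} draws {power} kW")
```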

Keywords: airport ontology, knowledge management, ontology modeling, reasoning

Procedia PDF Downloads 507
1680 Antibacterial Wound Dressing Based on Metal Nanoparticles Containing Cellulose Nanofibers

Authors: Mohamed Gouda

Abstract:

Antibacterial wound dressings based on cellulose nanofibers containing different metal nanoparticles (CMC-MNPs) were synthesized using an electrospinning technique. First, composites of carboxymethyl cellulose containing different metal nanoparticles (CMC/MNPs), such as copper nanoparticles (CuNPs), iron nanoparticles (FeNPs), zinc nanoparticles (ZnNPs), cadmium nanoparticles (CdNPs) and cobalt nanoparticles (CoNPs), were synthesized, and these composites were then transferred to the electrospinning process. The synthesized CMC-MNPs were characterized using scanning electron microscopy (SEM) coupled with energy-dispersive X-ray (EDX) analysis, and UV-visible spectroscopy was used to confirm nanoparticle formation. The SEM images clearly showed regular flat shapes with semi-porous surfaces. All MNPs were well distributed inside the backbone of the cellulose without aggregation. The average particle diameters were 29-39 nm for ZnNPs, 29-33 nm for CdNPs, 25-33 nm for CoNPs, 23-27 nm for CuNPs and 22-26 nm for FeNPs. Surface morphology, water uptake, release of MNPs from the nanofibers in water, and antimicrobial efficacy were studied. SEM images revealed that the electrospun CMC-MNPs nanofibers are smooth and uniformly distributed without bead formation, with average fiber diameters in the range of 300 to 450 nm; fiber diameters were not affected by the presence of MNPs. TEM images showed that the MNPs are present in/on the electrospun CMC-MNPs nanofibers and are spherical in shape. The CMC-MNPs nanofibers showed good hydrophilic properties and excellent antibacterial activity against the Gram-negative bacterium Escherichia coli and the Gram-positive bacterium Staphylococcus aureus.

Keywords: electrospinning technique, metal nanoparticles, cellulosic nanofibers, wound dressing

Procedia PDF Downloads 310
1679 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights

Authors: Julian Wise

Abstract:

Newcrest Mining is one of the world's top five gold and rare earth mining organizations by production, reserves and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 listed organization Insight Enterprises, to standardize machine learning solutions which process data from over a hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the utilization of cloud software architecture and edge computing, these technological developments enable standardized machine learning applications to influence the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings on mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and utilized for predictive modelling is processed through edge computing by resources collectively stored within a data lake. Being involved in this digital transformation has necessitated a standardized software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe for the purposes of improved worker safety and production efficiency through big data applications.

Keywords: mineral technology, big data, machine learning operations, data lake

Procedia PDF Downloads 91
1678 Effect of Depth on Texture Features of Ultrasound Images

Authors: M. A. Alqahtani, D. P. Coleman, N. D. Pugh, L. D. M. Nokes

Abstract:

In diagnostic ultrasound, the echographic B-scan texture is an important area of investigation, since it can be analyzed to characterize the histological state of internal tissues. An important factor requiring consideration when evaluating ultrasonic tissue texture is depth. The attenuation of ultrasound with depth, the size of the region of interest, gain, and dynamic range are important variables to consider, as they can influence the analysis of texture features. These sources of variability have to be considered carefully when evaluating image texture, as different settings might influence the resultant image. The aim of this study is to investigate the effect of depth on texture features in vivo using a 3D ultrasound probe. The medial head of the gastrocnemius muscle of the left leg of 10 healthy subjects was scanned. Two regions, A and B, were defined at different depths within the gastrocnemius muscle boundary. The size of both ROIs was 280 × 20 pixels, and the distance between regions A and B was kept constant at 5 mm. Texture parameters including gray level, variance, skewness, kurtosis, co-occurrence matrix, run-length matrix, gradient, autoregressive (AR) model and wavelet transform features were extracted from the images. The paired t-test was used to test the depth effect for the normally distributed data, and the Wilcoxon-Mann-Whitney test was used for the non-normally distributed data. The gray level, variance, and run-length matrix features were significantly lowered when the depth increased, while the other texture parameters showed similar values at different depths. All the texture parameters showed no significant difference between depths A and B (p > 0.05) except for gray level, variance and run-length matrix (p < 0.05). This indicates that gray level, variance, and run-length matrix features are depth dependent.
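
As a minimal sketch of the test logic described above, normality screening followed by a paired t-test or the nonparametric fallback named in the abstract, with placeholder arrays standing in for the study's feature values:

```python
# Placeholder data, not the study's measurements: one texture feature in
# regions A (shallow) and B (deep) for 10 subjects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
feat_A = rng.normal(120.0, 10.0, 10)
feat_B = feat_A - rng.normal(8.0, 3.0, 10)

_, p_norm = stats.shapiro(feat_A - feat_B)   # normality of the differences
if p_norm > 0.05:
    stat, p = stats.ttest_rel(feat_A, feat_B)      # paired t-test
    test = "paired t-test"
else:
    stat, p = stats.mannwhitneyu(feat_A, feat_B)   # as named in the abstract
    test = "Wilcoxon-Mann-Whitney"
print(f"{test}: p = {p:.4f} -> {'depth dependent' if p < 0.05 else 'no depth effect'}")
```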

Keywords: ultrasound image, texture parameters, computational biology, biomedical engineering

Procedia PDF Downloads 269
1677 Beyond Baudrillard: A Critical Intersection between Semiotics and Materialism

Authors: Francesco Piluso

Abstract:

Nowadays, restoring the deconstructive power of semiotics implies a critical analysis of neoliberal ideology and, even more critically, a confrontation with the materialist perspective. The theoretical path of Jean Baudrillard is crucial to understanding the ambivalence of this intersection. A semiotic critique of Baudrillard's work, through tools of both structuralism and interpretative semiotics, aims to give materialism a new consistent semiotic approach, and vice versa. According to Baudrillard, the commodity form is characterized by the same abstract and systemic logic as the sign-form, in which the production of the signified (use-value) is a mere ideological means for the reproduction of the chain of signifiers (exchange-value). Nevertheless, this parallelism is broken by the author himself: while the use-value is deconstructed in its relative logic, the signified and the referent, both as discrete and positive elements, are collapsed onto the same plane in the shadow of the signifying forms. These divergent considerations lead Baudrillard to the same crucial point: the dismissal of the material world, replaced by hyperreality as the reproduction of a semiotic (genetic) Code. The stress on the concept of form, as an epistemological and semiotic tool for analysing the construction of values in consumer society, has led to the Code as its ontological drift. In other words, Baudrillard seems to enclose consumer society (and reality) in this immanent and self-fetishized world of signs, an ideological perspective that mystifies the gravity of the material relationships between the Northern-Western World and the Third World. The notion of Encyclopaedia by Umberto Eco is the key to overturning the relationship of immanence/transcendence between the Code and the political economy of the sign, by understanding the former as an ideological plane within the encyclopaedia itself. Therefore, rather than building semiotic (hyper)realities, semiotics has to deal with materialism in terms of material relationships of power which are mystified and reproduced through such ideological ontologies of signs.

Keywords: Baudrillard, Code, Eco, Encyclopaedia, epistemology vs. ontology, semiotics vs. materialism

Procedia PDF Downloads 137
1676 Functional Aspects of Carbonic Anhydrase

Authors: Bashistha Kumar Kanth, Seung Pil Pack

Abstract:

Carbonic anhydrase (CA) is ubiquitously distributed among organisms and is fundamental to many eukaryotic biological processes such as photosynthesis, respiration, CO2 and ion transport, calcification and acid-base balance. CA also occurs across the spectrum of prokaryotic metabolism, in both the archaeal and bacterial domains, and many individual species contain more than one class. In this review, the various roles of CA in cellular mechanisms are presented, with a view to identifying CA functions applicable for industrial use.

Keywords: carbonic anhydrase, mechanism, CO2 sequestration, respiration

Procedia PDF Downloads 470
1675 Political Deprivations, Political Risk and the Extent of Skilled Labor Migration from Pakistan: Finding of a Time-Series Analysis

Authors: Syed Toqueer Akhter, Hussain Hamid

Abstract:

Over the last few decades, an upward trend has been observed in labor migration from Pakistan. The emigrants are not just economically motivated but are also in search of a safe living environment, moving towards more developed countries in Europe, North America and the Middle East. The opportunity cost of migration comes in the form of brain drain, that is, the loss of qualified and skilled human capital. Throughout the history of Pakistan, situations of political instability have emerged, ranging from violations of political rights and political disappearances to political assassinations. Providing security to citizens is a major issue in Pakistan due to the increase in crime and terrorist activities. The aim of the study is to test the impact of political instability, appearing in the form of political terror, violation of political rights and restricted civil liberty, on the skilled migration of labor. Three proxies are used to measure political instability: the political terror scale (a scale of 1-5 rating the political terror and violence that a country encounters in a particular year), political rights (a rating of 1-7 describing the ability of people to participate without restraint in the political process) and civil liberty (a rating of 1-7, civil liberty being defined as freedom of expression and rights without government intervention). Using time series data from 1980-2011, distributed lag models were used for estimation, because migration is not a one-time process; previous events and previous migration can lead to more migration. Our research clearly shows that political instability, appearing in the form of political terror, political rights and civil liberty, is significant in explaining the extent of skilled migration from Pakistan.
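
A finite distributed lag specification of the kind described can be sketched as follows; the file name, column names and the lag depth are assumptions for illustration, as the paper's exact specification is not given in the abstract:

```python
# Sketch of a finite distributed lag regression of skilled emigration on a
# political-terror index. Data file and columns are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pakistan_migration.csv")  # assumed yearly data, 1980-2011

lags = 3
for k in range(lags + 1):
    df[f"terror_lag{k}"] = df["political_terror"].shift(k)

df = df.dropna()
X = sm.add_constant(df[[f"terror_lag{k}" for k in range(lags + 1)]])
model = sm.OLS(df["skilled_emigrants"], X).fit()
print(model.summary())
# The long-run effect is the sum of the lag coefficients (skipping the constant):
print("long-run effect:", model.params.iloc[1:].sum())
```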

Keywords: skilled labor migration, political terror, political rights, civil liberty, distributed lag model

Procedia PDF Downloads 1005
1674 Data-Driven Simulation Tools for DER and Battery-Rich Power Grids

Authors: Ali Moradiamani, Samaneh Sadat Sajjadi, Mahdi Jalili

Abstract:

Power system analysis has been a major research topic in the generation and distribution sectors, in both industry and academia, for a long time. Several load flow and fault analysis scenarios are normally performed to study the performance of different parts of the grid in the context of, for example, voltage and frequency control. Software tools, such as PSCAD, PSSE, and PowerFactory DIgSILENT, have been developed to perform these analyses accurately. The distribution grid has traditionally been the passive part of the grid, known as the grid of consumers. However, a significant paradigm shift has happened with the emergence of Distributed Energy Resources (DERs) at the distribution level. This means that the concept of power system analysis needs to be extended to the distribution grid, especially considering self-sufficient technologies such as microgrids. Compared to the generation and transmission levels, the distribution level includes significantly more generation/consumption nodes, thanks to rooftop PV solar generation and battery energy storage systems. In addition, diverse consumption profiles are expected from household residents, resulting in a wide set of scenarios. The emergence of electric vehicles will make the environment even more complicated, considering their charging (and possibly discharging) requirements. These complexities, as well as the large size of distribution grids, create challenges for the available power system analysis software. In this paper, we study the requirements of simulation tools for the distribution grid and how data-driven algorithms are required to increase the accuracy of the simulation results.

Keywords: smart grids, distributed energy resources, electric vehicles, battery storage systems, simulation tools

Procedia PDF Downloads 82
1673 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack

Authors: Vincent Andrew Cappellano

Abstract:

In the early phases of critical infrastructure system design, translating distributed computing requirements to an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failure or outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage of communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric assessing these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this effort, a set of simulation and evaluation efforts are undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures where system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on sampled requirements, contribute to the overall system functionality, such that as they are impacted/degraded, the impacted functional availability of the system can be determined. A reinforcement learning-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric rating their resilience to these attack types/strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architectural selection processes. It is intended that this additional data will create an improvement in the matching of resilient critical infrastructure system requirements to the correct architectures and implementations that will support improved operation during times of system degradation due to failures and infrastructure attacks.
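
A toy version of the core idea, distribute functions across an architecture graph, knock out nodes, and record how much functionality survives, might look like the sketch below; the topology, the random knock-out (in place of the RL agent), and the notion of "functional" are assumptions for illustration:

```python
# Toy availability-vs-degradation sketch: a function is 'up' if all nodes
# hosting it survive and remain mutually reachable in the degraded graph.
import random
import networkx as nx

def functional_fraction(g: nx.Graph, functions: dict) -> float:
    up = 0
    for nodes in functions.values():
        alive = [n for n in nodes if n in g]
        if len(alive) == len(nodes) and all(
            nx.has_path(g, alive[0], n) for n in alive[1:]
        ):
            up += 1
    return up / len(functions)

random.seed(1)
arch = nx.erdos_renyi_graph(20, 0.25, seed=1)   # stand-in architecture
functions = {f"f{i}": random.sample(list(arch), 3) for i in range(5)}

for frac in (0.0, 0.1, 0.2, 0.3):
    g = arch.copy()
    g.remove_nodes_from(random.sample(list(arch), int(frac * 20)))
    print(f"degradation {frac:.0%}: availability {functional_fraction(g, functions):.2f}")
```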

Keywords: architecture, resiliency, availability, cyber-attack

Procedia PDF Downloads 77
1672 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy

Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N. Thillaigovindan

Abstract:

For profitable businesses, queues are double-edged swords, and the pain of long wait times in a queue often frustrates customers. This paper suggests a technical way of reducing that pain through a Poisson M/M1,M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline ‘First Come First Served with an m policy’ (FCFS-m policy). Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μj (j = 1, 2). The primary condition for implementing the FCFS-m policy on these service rates μj (j = 1, 2) is that either (m+1)μ2 > μ1 > mμ2 or (m+1)μ1 > μ2 > mμ1 must be satisfied. Further, waiting customers prefer server 1 whenever it becomes available for service, and server 2 should be brought into service if and only if the queue length exceeds the threshold value m. Steady-state results on queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server 2 is illustrated in a specific numerical exercise that equalizes the average queue length cost with the service cost. Assuming that server 1 dynamically adjusts its service rate to μ1 while the system size is strictly less than T = (m+2) with μ2 = 0, and to μ1 + μ2 with μ2 > 0 when the system size is greater than or equal to T, the corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1,M2/2 queues. To conclude, this investigation has a viable application: the results of the M/M1+M2/1 queue have been used to model the processing of waiting messages at a single computer node and to measure the power consumption of the node.
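
The deduced single-server form lends itself to a compact numerical check: below the threshold T = m + 2 the service rate is μ1, at or above it μ1 + μ2, giving a birth-death chain whose steady state follows from the standard balance equations. The parameter values below are illustrative assumptions chosen to satisfy (m+1)μ2 > μ1 > mμ2, not the paper's data:

```python
# Steady state of the deduced M/M1+M2/1 birth-death model.
def steady_state(lam, mu1, mu2, m, n_max=2000):
    T = m + 2
    # unnormalised p_n = prod_{k=1..n} lam / mu(k), state-dependent mu(k)
    p = [1.0]
    for n in range(1, n_max + 1):
        rate = mu1 if n < T else mu1 + mu2
        p.append(p[-1] * lam / rate)
    Z = sum(p)
    p = [x / Z for x in p]
    L = sum(n * pn for n, pn in enumerate(p))  # mean number in system
    W = L / lam                                # mean sojourn time (Little's law)
    return L, W

L, W = steady_state(lam=1.2, mu1=1.0, mu2=0.4, m=2)
print(f"mean number in system L = {L:.3f}, mean sojourn time W = {W:.3f}")
```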

Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue

Procedia PDF Downloads 344
1671 Improving Junior Doctor Induction Through the Use of a Simple In-House Mobile Application

Authors: Dmitriy Chernov, Maria Karavassilis, Suhyoun Youn, Amna Izhar, Devasenan Devendra

Abstract:

Introduction and Background: A well-structured and comprehensive departmental induction improves patient safety and job satisfaction amongst doctors. The aims of our project were as follows: 1. Assess the perceived preparedness of junior doctors starting their rotation in Acute Medicine at Watford General Hospital. 2. Develop a supplemental induction guide and pocket reference in the form of an iOS mobile application. 3. Collect feedback after implementing the mobile application following a trial period of 8 weeks with a small cohort of junior doctors. Materials and Methods: A questionnaire was distributed to all new junior trainees starting in the department of Acute Medicine to assess their experience of the current induction. A mobile induction application was developed and trialled over a period of 8 weeks, distributed in addition to the existing didactic induction session. After the trial period, the same questionnaire was distributed to assess improvement in the induction experience. Analytics data were collected with users' consent to gauge user engagement and identify areas of improvement of the application. A feedback survey about the app was also distributed. Results: A total of 32 doctors used the application during the 8-week trial period. The application was accessed 7259 times in total, with the average user spending a cumulative 37 minutes 22 seconds on the app. The most used section was Clinical Guidelines, accessed 1490 times. The app feedback survey revealed positive reviews: 100% of participants (n=15/15) responded that the app improved their overall induction experience compared to other placements; 93% (n=14/15) responded that the app improved overall efficiency in completing daily ward jobs compared to previous rotations; and 93% (n=14/15) responded that the app improved patient safety overall. In the pre-app and post-app induction surveys, participants reported: a 48% improvement in awareness of practical aspects of the job; a 26% improvement in awareness of where to locate pathways and clinical guidelines; and a 40% reduction in feelings of being overwhelmed. Conclusions and recommendations: This study demonstrates the importance of technology in medical education and clinical induction. The mobile application's engagement time equates to over 20 cumulative hours of on-the-job training delivered across users within an 8-week period. The most used and referred-to section was Clinical Guidelines, showing that there is high demand for an accessible pocket guide to this type of material. This simple mobile application resulted in a significant improvement in feedback about induction in our department of Acute Medicine and will likely impact workplace satisfaction. Limitations of the application include: the post-app surveys had a small number of participants; the app is currently only available for iPhone users; some useful sections are nested deep within the app; it lacks deep search functionality across all sections; it lacks real-time user feedback; and it requires regular review and updates. Future steps for the app include: developing a web app with an admin dashboard to simplify uploading and editing content; comprehensive search functionality; and a user feedback and peer ratings system.

Keywords: mobile app, doctor induction, medical education, acute medicine

Procedia PDF Downloads 70
1670 Impact of Economic Globalization on Ecological Footprint in India: Evidenced with Dynamic ARDL Simulations

Authors: Muhammed Ashiq Villanthenkodath, Shreya Pal

Abstract:

Purpose: This study scrutinizes the impact of economic globalization on the ecological footprint while endogenizing economic growth and energy consumption, over 1990 to 2018 in India. Design/methodology/approach: Standard unit root tests have been employed for the time series analysis to unveil the order of integration. Then, cointegration was confirmed using autoregressive distributed lag (ARDL) analysis. Further, the study executed the dynamic ARDL simulation model to estimate long-run and short-run results along with simulation and robust prediction. Findings: The cointegration analysis confirms the existence of a long-run association among the variables. Further, economic globalization reduces the ecological footprint in the long run. Similarly, energy consumption decreases the ecological footprint. In contrast, economic growth spurs the ecological footprint in India. Originality/value: This study contributes to the literature in many ways. First, unlike studies that employ a CO2 emissions and globalization nexus, this study employs the ecological footprint for measuring environmental quality; since it is a broader measure of environmental quality, it can offer a wide range of climate change mitigation policies for India. Second, the study executes a multivariate framework with updated series from 1990 to 2018 for India to explore the link between the ecological footprint, economic globalization, energy consumption, and economic growth. Third, the dynamic autoregressive distributed lag (ARDL) model has been used to explore the short- and long-run associations between the series. Finally, to our limited knowledge, this is the first study that uses economic globalization in the ecological footprint function of India, amid a trade-off between sustainable economic growth and the environment in the era of globalization.
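
A sketch of this pipeline in Python, ADF screening for the order of integration followed by an ARDL fit, might look as follows; the statsmodels ARDL class (statsmodels >= 0.13) is assumed available, and the file name, column names and lag orders are placeholders rather than the paper's choices:

```python
# Unit-root screening, then an ARDL fit of ecological footprint on
# globalization, energy use and growth. Data file and columns are hypothetical.
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ardl import ARDL

df = pd.read_csv("india_1990_2018.csv", index_col="year")

for col in df.columns:                      # integration-order check
    stat, pval, *_ = adfuller(df[col].dropna())
    print(f"ADF {col}: statistic = {stat:.3f}, p = {pval:.3f}")

y = df["ecological_footprint"]
X = df[["globalization", "energy_use", "gdp_per_capita"]]
res = ARDL(y, lags=1, exog=X, order=1, trend="c").fit()
print(res.summary())
```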

Keywords: economic globalization, ecological footprint, India, dynamic ARDL simulation model

Procedia PDF Downloads 104
1669 Reliability Qualification Test Plan Derivation Method for Weibull Distributed Products

Authors: Ping Jiang, Yunyan Xing, Dian Zhang, Bo Guo

Abstract:

The reliability qualification test (RQT) is widely used in product development to qualify whether a product meets predetermined reliability requirements, which are mainly described in terms of reliability indices, for example, MTBF (Mean Time Between Failures). In engineering practice, RQT plans mandatorily refer to standards such as MIL-STD-781 or GJB899A-2009. But these conventional RQT plans are not preferred, as they often require long test times or carry high risks for both producer and consumer, owing to the fact that the methods in the standards use only the test data of the product itself. The standards also usually assume that the product is exponentially distributed, which is not suitable for complex products other than electronics. It is therefore desirable to develop an RQT plan derivation method that safely shortens test time while keeping the two risks under control. To this end, an RQT plan derivation method is developed for products whose lifetime follows a Weibull distribution. The merit of the method is that expert judgment is taken into account. This is implemented by applying the Bayesian method, which translates expert judgment into prior information on product reliability. The producer's risk and the consumer's risk are then calculated accordingly. The procedures to derive RQT plans are also proposed in this paper. As extra information and expert judgment are added to the derivation, the derived test plans have the potential to shorten the required test time and carry satisfactorily low risks for both producer and consumer, compared with conventional test plans. A case study is provided to show that, when using expert judgment in deriving product test plans, the proposed method is capable of finding ideal test plans that not only reduce the two risks but also shorten the required test time.
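
For orientation, the two risks for a simple attribute-style plan (n units on test for time t, accept if at most c failures, Weibull lifetimes) can be computed directly; the plan parameters and the "acceptable"/"rejectable" characteristic lives below are illustrative assumptions, and the paper's Bayesian prior-updating step is not reproduced here:

```python
# Producer's and consumer's risks for a fixed-duration Weibull test plan.
from math import exp
from scipy.stats import binom

def fail_prob(t, eta, beta):
    """Weibull CDF: probability that a unit fails by time t."""
    return 1.0 - exp(-((t / eta) ** beta))

n, c, t, beta = 20, 3, 500.0, 1.5
eta_good, eta_bad = 2000.0, 800.0  # acceptable vs rejectable characteristic life

p_good = fail_prob(t, eta_good, beta)   # per-unit failure probability ~0.118
p_bad = fail_prob(t, eta_bad, beta)     # per-unit failure probability ~0.390

producer_risk = 1.0 - binom.cdf(c, n, p_good)  # rejecting a good product
consumer_risk = binom.cdf(c, n, p_bad)         # accepting a bad product
print(f"producer's risk = {producer_risk:.3f}, consumer's risk = {consumer_risk:.3f}")
```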

Keywords: expert judgment, reliability qualification test, test plan derivation, producer’s risk, consumer’s risk

Procedia PDF Downloads 109
1668 Bench-Scale Evaluation of Alternative-to-Chlorination Disinfection Technologies for the Treatment of the Maltese Tap-Water

Authors: Georgios Psakis, Imren Rahbay, David Spiteri, Jeanice Mallia, Martin Polidano, Vasilis P. Valdramidis

Abstract:

The absence of surface water and the progressive deterioration of groundwater quality have rapidly exacerbated scarcity, making the Mediterranean island of Malta one of the most water-stressed countries in Europe. Water scarcity challenges have been addressed by reverse osmosis desalination of seawater, 60% of which is blended with groundwater to form the current potable tap-water supply. Chlorination has been the adopted method of water disinfection prior to distribution. However, with the Maltese consumer chlorine sensory threshold being as low as 0.34 ppm, the presence of chlorine residuals and chlorination by-products in the distributed tap-water impacts negatively on its organoleptic attributes, deterring the public from consuming it. As part of the PURILMA initiative, and with the aim of minimizing the impact of chlorine residual on the quality of the distributed water, UV-C and hydrosonication have been identified as cost- and energy-effective decontamination alternatives, paving the way for more sustainable water management. Bench-scale assessment of the decontamination efficiency of UV-C (254 nm) revealed 4.7-log10 inactivation for both Escherichia coli and Enterococcus faecalis at 36 mJ/cm². At fluences above 200 mJ/cm², there was a systematic 2-log10 difference between the reductions exhibited by E. coli and E. faecalis, suggesting that UV-C disinfection was more effective against E. coli. Hybrid treatment schemes involving hydrosonication (at 9.5 and 12.5 dm³/min flow rates with 1-5 MPa maximum pressure) and UV-C showed at least 1.1-fold greater bactericidal activity relative to the individualized UV-C treatments. The observed inactivation appeared to stem from additive effects of the combined treatments, with hydrosonication-generated reactive oxygen species enhancing the biocidal activity of UV-C.
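
Assuming the usual log-linear (Chick-Watson-type) UV dose response as a first approximation, the reported 4.7-log10 reduction at 36 mJ/cm² implies a rate constant that can be used for rough extrapolation; this is a sketch only, as real inactivation curves commonly tail off at high fluence:

```python
# Back-of-envelope extrapolation from the reported 4.7-log10 at 36 mJ/cm2.
k = 4.7 / 36.0   # log10 reductions per mJ/cm2, assuming log-linearity
for dose in (10, 36, 100, 200):
    print(f"{dose:>3} mJ/cm2 -> ~{k * dose:.1f}-log10 predicted reduction")
```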

Keywords: disinfection, groundwater, hydrosonication, UV-C

Procedia PDF Downloads 145
1667 Power Energy Management for a Grid-Connected PV System Using Rule-Based Fuzzy Logic

Authors: Nousheen Hashmi, Shoab Ahmad Khan

Abstract:

Active collaboration among green energy sources and the load demand leads to serious issues related to power quality and stability. The growing number of green energy resources and distributed generators requires newer strategies to be incorporated for their operation, to maintain power stability among green energy resources and the micro-grid/utility grid. This paper presents a novel technique for energy and power management in a grid-connected photovoltaic system with energy storage, under a set of constraints including weather conditions, load shedding hours and peak pricing hours, using a rule-based fuzzy smart grid controller to schedule power coming from multiple sources (photovoltaic, grid, battery). The technique fuzzifies all the inputs and establishes a fuzzy rule set from the fuzzy outputs before defuzzification. Simulations are run for a 24-hour period, and a rule-based power scheduler is developed. The proposed fuzzy control strategy is able to sense continuous fluctuations in photovoltaic power generation, load demand, grid availability (load shedding patterns) and battery state of charge in order to make correct and quick decisions. The suggested fuzzy rule-based scheduler can operate well with vague inputs, and thus does not require an exact numerical model and can handle nonlinearity. This technique provides a framework that can be extended to handle multiple special cases for optimized working of the system.
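
A toy version of such a controller can be put together with scikit-fuzzy; the variable universes, membership functions and the two rules below are invented for illustration and are far simpler than a full 24-hour, price- and load-shedding-aware scheduler:

```python
# Toy fuzzy scheduler: two inputs (battery state of charge, PV output),
# one output (power to draw from the grid). All values are assumptions.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

soc = ctrl.Antecedent(np.arange(0, 101, 1), "soc")      # battery %, 0-100
pv = ctrl.Antecedent(np.arange(0, 5.1, 0.1), "pv")      # PV output, kW
grid = ctrl.Consequent(np.arange(0, 5.1, 0.1), "grid")  # grid draw, kW

soc.automf(3)   # generates 'poor', 'average', 'good' membership functions
pv.automf(3)
grid["low"] = fuzz.trimf(grid.universe, [0, 0, 2.5])
grid["high"] = fuzz.trimf(grid.universe, [2.5, 5, 5])

rules = [
    ctrl.Rule(soc["good"] | pv["good"], grid["low"]),   # battery/PV cover the load
    ctrl.Rule(soc["poor"] & pv["poor"], grid["high"]),  # fall back on the grid
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["soc"] = 35
sim.input["pv"] = 1.2
sim.compute()
print(f"suggested grid draw: {sim.output['grid']:.2f} kW")
```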

Keywords: photovoltaic, power, fuzzy logic, distributed generators, state of charge, load shedding, membership functions

Procedia PDF Downloads 462
1666 Comparison of Blockchain Ecosystem for Identity Management

Authors: K. S. Suganya, R. Nedunchezhian

Abstract:

In recent years, blockchain technology has been regarded as the most significant discovery of the digital era after the Internet and cloud computing. A blockchain is a simple, distributed public ledger that records users' transaction details in blocks. A global copy of each block is shared among all peer-to-peer network users after validation by the blockchain miners. Once a block is validated and accepted, it cannot be altered by any user, making transactions trust-free. Blockchain also resolves the problem of double-spending by using traditional cryptographic methods. Since the advent of Bitcoin, blockchain has been the backbone for all its transactions, but in recent years it has found roots and uses in many fields such as smart contracts, smart city management and healthcare. Identity management against digital identity theft has become a major concern among financial and other organizations. To counter digital identity theft, blockchain technology can be employed alongside existing identity management systems: a distributed public ledger holds details of an individual's identity (such as digital birth certificates, citizenship number, bank details, voter details and driving license) in blocks verified on the blockchain, which become time-stamped, unforgeable and publicly visible to any legitimate user. The main challenge in using blockchain technology to prevent digital identity theft is ensuring the pseudo-anonymity and privacy of users. This survey paper studies blockchain concepts, consensus protocols, and various blockchain-based digital identity management systems together with their research scope. The paper also discusses the role of blockchain in COVID-19 pandemic management through self-sovereign identity and supply chain management.
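
The tamper-evidence property described above, each block binding the hash of its predecessor so that altering any record breaks the chain, can be illustrated in a few lines of Python; this is a toy model, not a production identity ledger:

```python
# Minimal hash-chained ledger: altering any stored record invalidates the chain.
import hashlib, json, time

def make_block(data: dict, prev_hash: str) -> dict:
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False                      # record was altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # chain linkage broken
    return True

chain = [make_block({"id": "cert-001", "holder": "alice"}, "0" * 64)]
chain.append(make_block({"id": "cert-002", "holder": "bob"}, chain[-1]["hash"]))
print(verify(chain))                          # True
chain[0]["data"]["holder"] = "mallory"        # tamper with an identity record
print(verify(chain))                          # False
```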

Keywords: blockchain, consensus protocols, bitcoin, identity theft, digital identity management, pandemic, COVID-19, self-sovereign identity

Procedia PDF Downloads 103
1665 Geospatial Curve Fitting Methods for Disease Mapping of Tuberculosis in Eastern Cape Province, South Africa

Authors: Davies Obaromi, Qin Yongsong, James Ndege

Abstract:

To interpolate scattered or regularly distributed data, there are inexact and exact methods; some of these are suited to interpolating data on a regular grid and others on an irregular grid. In spatial epidemiology, it is important to examine how disease prevalence rates are distributed in space and how they relate to each other within a defined distance and direction. In this study, for the geographic and graphic representation of disease prevalence, linear and biharmonic spline methods were implemented in MATLAB and used to identify, localize and compare smoothing in the distribution patterns of tuberculosis (TB) in the Eastern Cape Province. The aim of this study is to produce a smoother graphical disease map of TB prevalence patterns by 3-D curve fitting techniques, especially biharmonic splines, which can suppress noise easily by seeking a least-squares fit rather than exact interpolation. The datasets are represented generally as 3D or XYZ triplets, where X and Y are the spatial coordinates and Z is the variable of interest, in this case TB counts in the province. The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function, and it has become a conventional method thanks to its high precision, simplicity and flexibility. Surface and contour plots are produced for TB prevalence at the provincial level for 2012-2015. From the results, the general outlook of all the fittings showed a systematic pattern in the distribution of TB cases in the province, consistent with spatial statistical analyses previously carried out in the province. This method is rarely used in disease mapping applications, but it has the superior advantage that it can be assessed at arbitrary locations rather than only on a rectangular grid, as in most traditional GIS methods of geospatial analysis.
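
For readers working outside MATLAB: its biharmonic spline has no direct SciPy equivalent, but a thin-plate-spline RBF with a smoothing term behaves similarly (least-squares smoothing rather than exact interpolation). The sketch below uses synthetic coordinates and counts as placeholders for the district-level TB data:

```python
# Thin-plate-spline RBF surface fit as an analogue of the biharmonic spline.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(40, 2))           # locality coordinates (x, y)
tb = 50 + 0.5 * xy[:, 0] + rng.normal(0, 5, 40)  # noisy TB counts (synthetic)

fit = RBFInterpolator(xy, tb, kernel="thin_plate_spline", smoothing=1.0)

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = fit(grid).reshape(gx.shape)            # smooth prevalence surface
print(surface.shape)                             # (50, 50), ready for contour plots
```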

Keywords: linear, biharmonic splines, tuberculosis, South Africa

Procedia PDF Downloads 219
1664 Lightweight and Seamless Distributed Scheme for the Smart Home

Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro

Abstract:

Security of the smart home in terms of behavior activity pattern recognition is a unique issue, quite dissimilar to the security issues of other scenarios. Sensor devices (low capacity and high capacity) interact and negotiate with each other by detecting the daily behavior activities of individuals to execute common tasks. Once a device (e.g., a surveillance camera, smart phone or light detection sensor) is compromised, an adversary can gain access to that device and disrupt daily behavior activity by altering its data and commands. In this scenario, a group of common instruction processes may become entangled and generate deadlock. Therefore, an effective and suitable security solution is required for the smart home architecture. This paper proposes a seamless distributed scheme which fortifies low-computational wireless devices for secure communication. The proposed scheme is based on a lightweight key-session process that upholds a cryptographic link along the trajectory by recognizing individuals' behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Analysis of experiments reveals that the proposed scheme strengthens the devices against device seizure attacks by recognizing daily behavior activities, minimizes the memory space utilization of LCS, and keeps the network free of deadlock. Additionally, the results of a comparison with other schemes indicate that the scheme maintains efficiency in terms of computation and communication.

Keywords: authentication, key-session, security, wireless sensors

Procedia PDF Downloads 297
1663 Low Overhead Dynamic Channel Selection with Cluster-Based Spatial-Temporal Station Reporting in Wireless Networks

Authors: Zeyad Abdelmageid, Xianbin Wang

Abstract:

Choosing the operational channel for a WLAN access point (AP) has traditionally been a static channel assignment process initiated by the user during deployment of the AP, which fails to cope with the dynamic conditions of the assigned channel at the station side afterward. The dramatically growing number of Wi-Fi APs and stations (STAs) operating in the unlicensed band has led to dynamic, distributed, and often severe interference. This highlights the urgent need for the AP to dynamically select the best overall channel of operation for the basic service set (BSS) by considering the distributed and changing channel conditions at all stations. Consequently, dynamic channel selection algorithms which consider feedback from the station side have been developed. Despite the significant performance improvement, existing channel selection algorithms suffer from very high feedback overhead. Feedback latency from the STAs, caused by this overhead, can mean that the eventually selected channel is no longer optimal for operation, given the dynamic sharing nature of the unlicensed band. This has inspired us to develop a dynamic channel selection algorithm with reduced overhead, through a proposed low-overhead, cluster-based station reporting mechanism. The main idea behind cluster-based station reporting is the observation that STAs which are very close to each other tend to have very similar channel conditions. Instead of requesting each STA to report on every candidate channel, causing high overhead, the AP divides the STAs into clusters and then assigns each STA in each cluster one channel to report feedback on. With a proper design of the cluster-based reporting, the AP loses no information about the channel conditions at the station side while reducing feedback overhead. The simulation results show equal performance and, at times, better performance with a fraction of the overhead. We believe that this algorithm has great potential for the design of future dynamic channel selection algorithms with low overhead.
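
A minimal sketch of the clustering-plus-assignment step, using DBSCAN (named in the keywords) to group stations by proximity and a round-robin over candidate channels within each cluster, might look as follows; the station positions, eps value and channel list are illustrative assumptions:

```python
# Group STAs by spatial proximity, then spread the candidate-channel
# reporting duty across each cluster's members.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)
# 30 stations in three spatial clumps (coordinates in metres)
sta_xy = np.vstack([rng.normal(c, 1.5, size=(10, 2)) for c in (5, 20, 40)])

labels = DBSCAN(eps=4.0, min_samples=3).fit_predict(sta_xy)
channels = [1, 6, 11, 36, 40, 44]

assignments = {}
for cluster in set(labels) - {-1}:          # -1 marks DBSCAN noise points
    members = np.flatnonzero(labels == cluster)
    for i, sta in enumerate(members):
        # Round-robin: nearby STAs report on different channels, so the
        # cluster jointly covers the whole candidate list.
        assignments[int(sta)] = channels[i % len(channels)]
print(assignments)
```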

Keywords: channel assignment, Wi-Fi networks, clustering, DBSCAN, overhead

Procedia PDF Downloads 94
1662 Adaptive Certificate-Based Mutual Authentication Protocol for Mobile Grid Infrastructure

Authors: H. Parveen Begam, M. A. Maluk Mohamed

Abstract:

Mobile grid computing is an environment that allows sharing and coordinated use of diverse resources in dynamic, heterogeneous and distributed environments using different types of portable electronic devices. In a grid environment, security issues such as authentication, authorization, message protection and delegation are handled by the GSI (Grid Security Infrastructure). Providing better security between mobile devices and the grid infrastructure is a major issue because of the open nature of wireless networks and the heterogeneous, distributed environment. In a mobile grid environment, the individual computing devices may be resource-limited in isolation, but as an aggregated sum they have the potential to play a vital role within the mobile grid environment. Some adaptive methodology or solution is needed to solve issues such as authentication of a base station, security of information flowing between a mobile user and a base station, prevention of attacks within a base station, hand-over of authentication information, the communication cost of establishing a session key between mobile user and base station, and the computational complexity of achieving authenticity and security. The sharing of device resources can be achieved only through trusted relationships between the mobile hosts (MHs); before accessing a grid service, mobile devices should be proven authentic. This paper proposes a dynamic certificate-based mutual authentication protocol between two mobile hosts in a mobile grid environment. The certificate generation process is performed by a CA (Certificate Authority) for all authenticated MHs. Security (through the validity period of the certificate) and dynamicity (transmission time) can be achieved through secure service certificates. The authentication protocol is built on communication services to provide cryptographically secure mechanisms for verifying the identity of users and resources.

Keywords: mobile grid computing, certificate authority (CA), SSL/TLS protocol, secured service certificates

Procedia PDF Downloads 285
1661 Integrated Risk Management in the Supply Chain of Essential Medicines in Zambia

Authors: Mario M. J. Musonda

Abstract:

Access to health care is a human right, which includes having timely access to affordable and quality essential medicines at the right place and in sufficient quantity. However, inefficient public sector supply chain management contributes to constant shortages of essential medicines at health facilities. The literature review involved a desktop study of published research studies and reports on risk management and supply chain management of essential medicines, and on their integration to increase the efficiency of the latter. The research was conducted on a sample population of offices under the Ministry of Health Headquarters, Lusaka Provincial and District Offices, selected health facilities in Lusaka, Medical Stores Limited, the Zambia Medicines Regulatory Authority and cooperating partners. Individuals involved in the study were selected judgmentally according to their functions in the selection, quantification, regulation, procurement, storage, distribution, quality assurance and dispensing of essential medicines. Structured interviews and discussions were held with selected experts, and self-administered questionnaires were distributed; data from the 35 returned and usable questionnaires out of the 50 distributed were collected and analysed. The highest prioritised risks were inadequate and inconsistent fund disbursements, weak information management systems, weak quality management systems and insufficient resources (human resources and infrastructure), among others. The results of this research can be used to increase the efficiency of the public sector supply chain of essential medicines and other pharmaceuticals. The results of the study showed that participating institutions and organisations need to implement effective risk management systems to increase the efficiency of the entire supply chain, in order to avoid and/or reduce shortages of essential medicines at health facilities.

Keywords: essential medicine, risk assessment, risk management, supply chain, supply chain risk management

Procedia PDF Downloads 422
1660 Phytogeography and Regional Conservation Status of Gymnosperms in Pakistan

Authors: Raees Khan, Mir A. Khan, Sheikh Z. Ul Abidin, Abdul S. Mumtaz

Abstract:

In the present study, the phytogeography and conservation status of the gymnosperms of Pakistan were investigated. 44 gymnosperm species of 18 genera and 9 families were collected from 66 districts of the country. Among the 44 species, 20 were native (wild) and 24 were exotic (cultivated). Ephedra sarcocarpa of the Ephedraceae was not collected in this study from its known distribution area, and most probably it is now nationally extinct in this area. Previously, 34 species were reported in the Gymnosperms Flora of Pakistan; 12 gymnosperm species were recorded here for the first time. Pinus wallichiana (40 districts), Cedrus deodara (39 districts), Pinus roxburghii (36 districts), Picea smithiana (36 districts) and Abies pindrow (34 districts) have the widest ecological amplitude. Juniperus communis (17 districts) and Juniperus excelsa (14 districts) were the most widely distributed among the junipers. Ephedra foliata (23 districts), Ephedra gerardiana (20 districts) and Ephedra intermedia (19 districts) have the widest distribution ranges in their genus. Taxus fuana also had a wide distribution range, recorded in 19 districts, but its population was not very stable; this species was recorded to support the local flora and fauna, especially endemics. PC-ORD version 5 clustered all gymnosperm species into 4 communities and all localities into 5 groups through cluster analysis. Two-way cluster analysis of the 66 districts (localities) yielded 4 distinct plant communities. The gymnosperms of Pakistan are distributed in 3 floristic regions: the western plains of the country, the northern and western mountainous regions, and the Western Himalayas. In the assessment of the national conservation status of these species, 10 species were found to be threatened, 6 endangered and 4 critically endangered, and 1 species (Ephedra sarcocarpa) has become extinct. The populations of some species, i.e., Taxus fuana, Ephedra gerardiana, Ephedra monosperma, Picea smithiana and Abies spectabilis, are decreasing at an alarming rate.

Keywords: conservation status, gymnosperms, phytogeography, Pakistan

Procedia PDF Downloads 235
1659 Useful Characteristics of Pleurotus Mushroom Hybrids

Authors: Suvalux Chaichuchote, Ratchadaporn Thonghem

Abstract:

Pleurotus mushroom is one of the popular edible mushrooms in Thailand, much favored by consumers for its delicious taste and high nutrition, and commonly used as an ingredient in several dishes. The commercially cultivated strain grown on most farms is the Pleurotus sp. ‘Hed Bhutan’, which is widely distributed to mushroom farms throughout the country and can be cultivated almost all year round. However, mushroom growers demand different cultivated strains, so strain improvement should be undertaken for their benefit. In this study, we used a di-mon mating method for hybrid production, with Hed Bhutan (P-3) providing the dikaryon material, while monokaryotic mycelia were isolated from the basidiospores of three other Pleurotus sp. by single spore isolation. The 3 hybrids P-3XSA-6, P-3XSB-24 and P-3XSE-5 were selected from the 12 successful hybridizations. They were suitable hybrids in terms of fruiting body performance over three cultivation cycles, including the number of days until fruiting, time to pinning, color and shape of fruiting bodies, and yield. For the genetic study, genomic DNAs of both Hed Bhutan (P-3) and the three hybrids were extracted, and the primer pair ITS1 and ITS4 was used to amplify the region coding for ITS1, ITS2 and 5.8S rRNA. Similarity searches of these amplified sequences against DNA databases revealed that Hed Bhutan (P-3) is Pleurotus pulmonarius, as are the P-3XSA-6, P-3XSB-24 and P-3XSE-5 hybrids. Furthermore, Hed Bhutan (P-3) and the three hybrids were distributed to 3 small-scale farms with mushroom farming experience in the countryside; one hundred and twenty mushroom bags of each strain were supplied to them. The findings, gathered by interview, indicated that two mushroom farmers were satisfied with the P-3XSA-6 and P-3XSB-24 hybrids thanks to their simultaneous fruiting time and good yield, while the other was satisfied with the P-3XSB-24 hybrid for its good yield and with the P-3XSE-5 hybrid for its gradual fruiting, which benefits frequent harvesting. Overall, the farmers adopted all hybrids as commercially cultivated strains alongside the Hed Bhutan (P-3) strain.

Keywords: dikaryon, monokaryon, pleurotus, strain improvement

Procedia PDF Downloads 231
1658 Modeling Socioeconomic and Political Dynamics of Terrorism in Pakistan

Authors: Syed Toqueer, Omer Younus

Abstract:

Terrorism, today, has emerged as a global menace, with Pakistan being the most adversely affected state. The motive behind this study is therefore to empirically establish the linkage of terrorism with socio-economic factors (uneven income distribution, poverty and unemployment) and political nexuses, so that a policy recommendation can be put forth to better approach this issue in Pakistan. For this purpose, the study employs two competing models, namely the distributed lag model and OLS, so that the findings may be consolidated comprehensively over the reference period 1984-2012. The findings of both models indicate that uneven income distribution in Pakistan is a contributing factor towards terrorism when measured through GDP per capita. This supports the hypothesis that the immiserizing modernization theory is applicable to the state of Pakistan, where the underprivileged are marginalized. The results also suggest that other socio-economic variables (poverty, unemployment and consumer confidence) can reduce the intensity of terrorism once these conditions are catered to and improved. The rationale of opportunity cost is at the base of this argument: poor employment conditions and poverty reduce the opportunity cost for individuals of being recruited by terrorist organizations, as economic returns are considerably low, thus increasing the supply of volunteers and subsequently the intensity of terrorism. The argument that political freedom is a means of lowering terrorism also stands true: the more people are politically repressed, the more alternative and illegal means they will find to make their voices heard. Likewise, the argument that a politically transitioning economy faces more terrorism is found applicable to Pakistan. Finally, the study contributes to an ongoing debate on which of the two sets of factors is more significant in relation to terrorism by suggesting that socio-economic factors are the primary causes of terrorism in Pakistan.

Keywords: terrorism, socioeconomic conditions, political freedom, distributed lag model, ordinary least squares

Procedia PDF Downloads 305
1657 Assessing the Accessibility to Primary Percutaneous Coronary Intervention

Authors: Tzu-Jung Tseng, Pei-Hsuen Han, Tsung-Hsueh Lu

Abstract:

Background: Ensuring that patients with ST-elevation myocardial infarction (STEMI) can reach hospitals able to perform percutaneous coronary intervention (PCI) in time is an important concern for healthcare managers. One commonly used method to assess the coverage of population access to a PCI hospital is the GIS-estimated linear distance (crow-fly distance) between the district centroid and the nearest PCI hospital. If that distance falls within a given threshold (such as 20 km), the entire population of the district is considered to have appropriate access to PCI. The premise of using the district centroid to estimate coverage of the population resident in that district is that the people living in the district are evenly distributed. In reality, population density is not evenly distributed within an administrative district, especially in rural districts. Fortunately, the Taiwan government recently released the basic statistical area (on average 450 people per area), which provides an opportunity to estimate the coverage of population access to PCI services more accurately. Objectives: We aimed in this study to compare the population covered by a given PCI hospital according to the traditional administrative district versus the basic statistical area. We further examined whether the difference between the two geographic units would be larger in rural areas than in urban areas. Method: We selected two hospitals in Tainan City for this analysis. Hospital A is in an urban area; hospital B is in a rural area. The population of each traditional administrative district and basic statistical area was obtained from the Taiwan National Geographic Information System, Ministry of the Interior. Results: The estimated population living within 20 km of hospitals A and B was 1,515,846 and 323,472, respectively, according to the traditional administrative district, and 1,506,325 and 428,556 according to the basic statistical area. Conclusion: In the urban area, the estimated population with access to PCI services was similar between the two geographic units. In the rural area, however, the access population would be substantially misestimated if traditional administrative districts were used.
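The coverage rule described above reduces to a simple computation: sum the population of every geographic unit (district or basic statistical area) whose centroid lies within 20 km crow-fly distance of the hospital. The Python sketch below illustrates this; the hospital coordinates, centroids and populations are made up for illustration and are not the study's data.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle (crow-fly) distance in kilometres between two WGS84 points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

hospital = (22.99, 120.21)  # hypothetical PCI hospital location near Tainan

units = [  # (centroid_lat, centroid_lon, population) -- hypothetical records
    (23.05, 120.18, 4210),
    (23.21, 120.40, 380),
    (22.95, 120.25, 5170),
]

# A unit's whole population counts as covered if its centroid is within 20 km.
covered = sum(
    pop for lat, lon, pop in units
    if haversine_km(lat, lon, hospital[0], hospital[1]) <= 20.0
)
print(f"Population within 20 km: {covered}")
```

Running the same rule over fine-grained basic statistical areas instead of whole districts is what reduces the misestimation the authors describe.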

Keywords: accessibility, basic statistical area, modifiable areal unit problem (MAUP), percutaneous coronary intervention (PCI)

Procedia PDF Downloads 436
1656 Efficiency of an Algae-Zinc Complex Compared to Inorganic Zinc Sulfate on Broiler Performance

Authors: R. Boulmane, C. Alleno, D. Marzin

Abstract:

Trace minerals play an essential role in vital processes and in many biological and physiological functions of the animal. They are usually incorporated in the form of inorganic salts such as sulfates and oxides. Most of these inorganic salts are excreted undigested by the animal, causing economic losses as well as environmental pollution. In this context, the use of alternative organic trace minerals with higher bioavailability is emerging. This study was set up to evaluate the effect of replacing zinc sulfate in the feed with an algae-zinc complex on the growth performance of broiler chickens. One thousand two hundred 1-day-old chicks were randomly distributed across 30 pens and allocated to one of three groups receiving different diets: a standard diet containing 35 ppm of inorganic zinc sulfate (C+), a test diet containing 35 ppm of algae-based zinc (T+), and a test diet containing a half dose (16 ppm) of algae-based zinc (T-). Three different feeds were distributed over D0-D11, D11-D21 and D21-D35. Individual weights (D21 and D35), feed consumption (D11, D21 and D35) and pododermatitis occurrence (D35) were monitored. Data were submitted to analysis of variance. Results show that, in the finishing period, the average daily weight gain (ADWG) of the T+ and T- groups was significantly higher than that of the control C+ (+6%, P = 0.03), while the feed conversion ratio (FCR) over the whole period was lower for both the T+ and T- groups than for the control C+ (-1.2%, P = 0.04). Pododermatitis scoring also showed fewer lesions in the test groups receiving algae-based zinc than in the control group receiving the inorganic form. In conclusion, this study shows a positive effect of the algae-zinc complex on the growth performance of broilers compared with inorganic zinc, at both the full dose (35 ppm) and the half dose (16 ppm). The use of an algae-zinc complex in the premix thus appears to be a good alternative for reducing zinc excretion while maintaining performance.
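As a concrete illustration of the analysis of variance mentioned above, the following Python sketch runs a one-way ANOVA on pen-level ADWG across the three diet groups using SciPy. The values below are invented for illustration only; they are not the trial data.

```python
from scipy import stats

# Hypothetical pen-level ADWG values (g/day) for the three diet groups.
adwg_c_plus = [61.2, 60.8, 62.1, 61.5, 60.9]   # C+: inorganic ZnSO4, 35 ppm
adwg_t_plus = [64.0, 63.4, 64.8, 63.9, 64.2]   # T+: algae-based zinc, 35 ppm
adwg_t_minus = [63.7, 64.1, 63.5, 64.4, 63.8]  # T-: algae-based zinc, 16 ppm

# One-way ANOVA: tests whether group means differ beyond chance variation.
f_stat, p_value = stats.f_oneway(adwg_c_plus, adwg_t_plus, adwg_t_minus)
print(f"F = {f_stat:.2f}, P = {p_value:.3f}")
```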

Keywords: algae-zinc complex, broiler performance, organic trace minerals, zinc sulfate

Procedia PDF Downloads 218
1655 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
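As an illustration of the first two ideas, the following Python sketch (using networkx) builds a small graph in which each transaction is a node and edges link transactions in consecutive blocks, then draws a uniform random sample of nodes and induces a subgraph on it. The transaction data are hypothetical, and this is a sketch of the general technique, not the paper's implementation.

```python
import random
import networkx as nx

transactions = [  # (tx_hash, block_number) -- hypothetical
    ("0xt1", 100), ("0xt2", 100), ("0xt3", 101), ("0xt4", 102), ("0xt5", 102),
]

# Each transaction is a node; an edge runs from a transaction to every
# transaction in the immediately following block, encoding the temporal
# relationship between them.
G = nx.DiGraph()
for tx, block in transactions:
    G.add_node(tx, block=block)
for tx_a, block_a in transactions:
    for tx_b, block_b in transactions:
        if block_b == block_a + 1:
            G.add_edge(tx_a, tx_b)

# Probabilistic sampling: keep a uniform random subset of nodes and analyze
# the induced subgraph; with a representative sample, aggregate network
# statistics can be estimated at a fraction of the full-graph cost.
random.seed(0)
sampled = random.sample(list(G.nodes), k=3)
subgraph = G.subgraph(sampled)
print(subgraph.number_of_nodes(), subgraph.number_of_edges())
```

In practice the sampled subgraphs would be fed to a GCN or partitioned across a Spark/Hadoop cluster, as the abstract describes; the sketch above only fixes the data representation and sampling steps.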

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 47