Search results for: distributed algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3865

2965 Signs, Signals and Syndromes: Algorithmic Surveillance and Global Health Security in the 21st Century

Authors: Stephen L. Roberts

Abstract:

This article offers a critical analysis of the rise of syndromic surveillance systems for the advanced detection of pandemic threats within contemporary global health security frameworks. The article traces the iterative evolution and ascendancy of three such novel syndromic surveillance systems for the strengthening of health security initiatives over the past two decades: 1) The Program for Monitoring Emerging Diseases (ProMED-mail); 2) The Global Public Health Intelligence Network (GPHIN); and 3) HealthMap. This article demonstrates how each newly introduced syndromic surveillance system has become increasingly oriented towards the integration of digital algorithms into core surveillance capacities to continually harness and forecast upon infinitely generating sets of digital, open-source data, potentially indicative of forthcoming pandemic threats. This article argues that the increased centrality of the algorithm within these next-generation syndromic surveillance systems produces a new and distinct form of infectious disease surveillance for the governing of emergent pathogenic contingencies. Conceptually, the article also shows how the rise of this algorithmic mode of infectious disease surveillance produces divergences in the governmental rationalities of global health security, leading to the rise of an algorithmic governmentality within contemporary contexts of Big Data and these surveillance systems. Empirically, this article demonstrates how this new form of algorithmic infectious disease surveillance has been rapidly integrated into diplomatic, legal, and political frameworks to strengthen the practice of global health security – producing subtle, yet distinct shifts in the outbreak notification and reporting transparency of states, increasingly scrutinized by the algorithmic gaze of syndromic surveillance.

Keywords: algorithms, global health, pandemic, surveillance

Procedia PDF Downloads 166
2964 Regret-Regression for Multi-Armed Bandit Problem

Authors: Deyadeen Ali Alshibani

Abstract:

In the literature, the multi-armed bandit problem is framed as a statistical decision model of an agent trying to optimize his decisions while improving his information at the same time. Several algorithm models and their applications to this problem have been proposed. In this paper, we evaluate regret-regression by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
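
As a concrete reference for the setting described above, the following is a minimal sketch of a stochastic bandit with an epsilon-greedy agent, tracking cumulative expected regret as the evaluation measure. The arm probabilities and parameters are hypothetical, and the regret-regression method itself is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    true_means = np.array([0.2, 0.5, 0.7])   # hypothetical Bernoulli arm probabilities
    n_steps, eps = 10_000, 0.1
    counts = np.zeros(3)
    estimates = np.zeros(3)
    regret = 0.0

    for t in range(n_steps):
        if rng.random() < eps:                   # explore a random arm
            arm = int(rng.integers(3))
        else:                                    # exploit the current estimate
            arm = int(np.argmax(estimates))
        reward = float(rng.random() < true_means[arm])          # Bernoulli reward
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
        regret += true_means.max() - true_means[arm]            # expected regret of the pull

    print(f"cumulative expected regret after {n_steps} steps: {regret:.1f}")

With a fixed exploration rate, the cumulative regret grows roughly linearly in the horizon, which is exactly the kind of behavior regret-based evaluations are designed to expose.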

Keywords: optimal, bandit problem, optimization, dynamic programming

Procedia PDF Downloads 439
2963 Integrated Risk Management in The Supply Chain of Essential Medicines in Zambia

Authors: Mario M. J. Musonda

Abstract:

Access to health care is a human right, which includes having timely access to affordable and quality essential medicines at the right place and in sufficient quantity. However, inefficient public sector supply chain management contributes to constant shortages of essential medicines at health facilities. The literature review involved a desktop study of published research studies and reports on risk management, supply chain management of essential medicines, and their integration to increase the efficiency of the latter. The research was conducted on a sample population of offices under the Ministry of Health Headquarters, Lusaka Provincial and District Offices, selected health facilities in Lusaka, Medical Stores Limited, Zambia Medicines Regulatory Authority, and Cooperating Partners. Individuals involved in the study were selected purposively according to their functions in the selection, quantification, regulation, procurement, storage, distribution, quality assurance, and dispensing of essential medicines. Structured interviews and discussions were held with selected experts, and self-administered questionnaires were distributed; data were collected and analysed from the 35 returned and usable questionnaires out of the 50 distributed. The highest-prioritised risks were inadequate and inconsistent fund disbursements, weak information management systems, weak quality management systems, and insufficient resources (HR and infrastructure), among others. The results of this research can be used to increase the efficiency of the public sector supply chain of essential medicines and other pharmaceuticals. The results of the study showed that participating institutions and organisations need to implement effective risk management systems to increase the efficiency of the entire supply chain in order to avoid and/or reduce shortages of essential medicines at health facilities.

Keywords: essential medicine, risk assessment, risk management, supply chain, supply chain risk management

Procedia PDF Downloads 425
2962 Through Additive Manufacturing: A New Perspective for the Mass Production of Made in Italy Products

Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola

Abstract:

Recent evolutions in innovation processes and in the intrinsic tendencies of the product development process lead to new considerations on the design flow. The instability and complexity that describe contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of additive manufacturing, but also of IoT and AI technologies, continually confronts us with new paradigms regarding design as a social activity. From the point of view of application, the totality of these technologies raises a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of a provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new relationship between the human and the algorithm. The contribution investigates this relationship starting from the state of the art of the Made in Italy model; the best-known fields of application are described, with a focus on specific cases in which the mutual relationship between humans and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. The trajectory described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact carries a signature that defines it in all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The envisaged result indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as a valid integrated tool in close relationship with the design culture.

Keywords: decision making, design heuristics, product design, product design process, design paradigms

Procedia PDF Downloads 101
2961 Phytogeography and Regional Conservation Status of Gymnosperms in Pakistan

Authors: Raees Khan, Mir A. Khan, Sheikh Z. Ul Abidin, Abdul S. Mumtaz

Abstract:

In the present study, the phytogeography and conservation status of the gymnosperms of Pakistan were investigated. 44 gymnosperm species of 18 genera and 9 families were collected from 66 districts of the country. Among the 44 species, 20 were native (wild) and 24 were exotic (cultivated). Ephedra sarcocarpa of Ephedraceae was not collected in this study from its distribution area, and most probably it is now nationally extinct from this area. Previously, 34 species were reported in the Gymnosperms Flora of Pakistan; 12 gymnosperm species were recorded here for the first time. Pinus wallichiana (40 districts), Cedrus deodara (39 districts), Pinus roxburghii (36 districts), Picea smithiana (36 districts), and Abies pindrow (34 districts) have the maximum ecological amplitude. Juniperus communis (17 districts) and Juniperus excelsa (14 districts) were the most widely distributed among the junipers. Ephedra foliata (23 districts), Ephedra gerardiana (20 districts), and Ephedra intermedia (19 districts) had the widest distribution ranges. Taxus fuana also had a wide distribution range, recorded in 19 districts, but its population was not very stable. These species were recorded to support the local flora and fauna, especially endemics. PCORD version 5 clustered all gymnosperm species into 4 communities and all localities into 5 groups through cluster analyses. The two-way cluster analysis of the 66 districts (localities) resulted in 4 plant communities. The gymnosperms of Pakistan are distributed in 3 floristic regions, i.e., the western plains of the country, the northern and western mountainous regions, and the Western Himalayas. In the assessment of the national conservation status of these species, 10 species were found to be threatened, 6 species were endangered, 4 species were critically endangered, and 1 species (Ephedra sarcocarpa) has become extinct. The populations of some species, i.e., Taxus fuana, Ephedra gerardiana, Ephedra monosperma, Picea smithiana, and Abies spectabilis, are decreasing at an alarming rate.

Keywords: conservation status, gymnosperms, phytogeography, Pakistan

Procedia PDF Downloads 239
2960 Recent Advances in Data Warehouse

Authors: Fahad Hanash Alzahrani

Abstract:

This paper describes some recent advances in the quickly developing area of data storing and processing based on Data Warehouses and Data Mining techniques, covering the software, hardware, data mining algorithms, and visualisation techniques that share common features across the specific problems and tasks of their implementation.

Keywords: data warehouse, data mining, knowledge discovery in databases, on-line analytical processing

Procedia PDF Downloads 382
2959 Refining Scheme Using Amphibious Epistemologies

Authors: David Blaine, George Raschbaum

Abstract:

The evaluation of DHCP has synthesized SCSI disks, and current trends suggest that the exploration of e-business that would allow for further study into robots will soon emerge. Given the current status of embedded algorithms, hackers worldwide obviously desire the exploration of replication, which embodies the confusing principles of programming languages. In our research we concentrate our efforts on arguing that erasure coding can be made "fuzzy", encrypted, and game-theoretic.

Keywords: SCSI disks, robot, algorithm, hacking, programming language

Procedia PDF Downloads 404
2958 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) trained on the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and auto-associative neural networks (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with accuracy and an F1 score greater than 96% with the proposed method.
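
For orientation, the following is a minimal sketch of the PCA-based anomaly detection baseline that the paper compares against: fit principal components on normal-condition frequency data only, then flag test points whose reconstruction error exceeds a threshold. The frequency values are hypothetical stand-ins, not Z-24 data, and this is not the proposed OCCNN2 detector.

    import numpy as np

    # Hypothetical data: rows = observations of the first four fundamental
    # frequencies (Hz); in the paper these come from OMA tracking on the bridge.
    rng = np.random.default_rng(1)
    train = rng.normal([3.9, 5.0, 9.8, 10.3], 0.05, size=(500, 4))  # normal condition
    test = rng.normal([3.7, 4.8, 9.5, 10.0], 0.05, size=(100, 4))   # shifted (damage-like)

    # Fit PCA on normal data only, keep k components, score by reconstruction error.
    mu = train.mean(axis=0)
    X = train - mu
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:2].T                                    # k = 2 principal directions

    def recon_error(samples):
        Z = samples - mu
        return np.linalg.norm(Z - Z @ V @ V.T, axis=1)

    thr = np.quantile(recon_error(train), 0.99)     # threshold from normal data only
    flags = recon_error(test) > thr
    print(f"flagged {flags.mean():.0%} of test points as anomalous")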

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 109
2957 Useful Characteristics of Pleurotus Mushroom Hybrids

Authors: Suvalux Chaichuchote, Ratchadaporn Thonghem

Abstract:

Pleurotus mushrooms are among the most popular edible mushrooms in Thailand, much favored by consumers for their delicious taste and high nutrition and commonly used as an ingredient in several dishes. The commercially cultivated strain grown on most farms is the Pleurotus sp. Hed Bhutan, which is widely distributed to mushroom farms throughout the country and can be cultivated almost all year round. However, mushroom growers demand a variety of cultivated strains; strain improvement should therefore be undertaken for their benefit. In this study, we used a di-mon mating method to produce hybrids from Hed Bhutan (P-3) as the dikaryon material, while monokaryotic mycelia were isolated from basidiospores of three other Pleurotus sp. by single-spore isolation. Three hybrids, P-3XSA-6, P-3XSB-24, and P-3XSE-5, were selected from the 12 successful hybridizations. They were suitable hybrids in terms of fruiting-body performance over three cultivation cycles, considering the number of days until fruiting, the time to pinning, the color and shape of the fruiting bodies, and the yield. For the genetic study, genomic DNA of both Hed Bhutan (P-3) and the three hybrids was extracted. The primer pair ITS1 and ITS4 was used to amplify the gene coding for ITS1, ITS2, and 5.8S rRNA. The similarities between these amplified genes and DNA databases revealed that Hed Bhutan (P-3) is Pleurotus pulmonarius, as are the P-3XSA-6, P-3XSB-24, and P-3XSE-5 hybrids. Furthermore, Hed Bhutan (P-3) and the three hybrids were distributed to three small-scale farms with mushroom farming experience in the countryside; one hundred and twenty mushroom bags of each strain were supplied to them. The findings, gathered by interview, indicated that two mushroom farmers were satisfied with the P-3XSA-6 and P-3XSB-24 hybrids, thanks to their simultaneous fruiting time and good yield, while the other was satisfied with the P-3XSB-24 hybrid due to its good yield and with the P-3XSE-5 hybrid thanks to its gradual fruiting, which allows frequent harvests. Overall, the farmers adopted all hybrids, as well as the Hed Bhutan (P-3) strain, as commercially cultivated strains.

Keywords: dikaryon, monokaryon, pleurotus, strain improvement

Procedia PDF Downloads 234
2956 Modeling Socioeconomic and Political Dynamics of Terrorism in Pakistan

Authors: Syed Toqueer, Omer Younus

Abstract:

Terrorism, today, has emerged as a global menace, with Pakistan among the most adversely affected states. The motive behind this study is therefore to empirically establish the linkage of terrorism with socio-economic factors (uneven income distribution, poverty, and unemployment) and political nexuses so that a policy recommendation can be put forth to better approach this issue in Pakistan. For this purpose, the study employs two competing models, namely the distributed lag model and OLS, so that the findings of the models may be consolidated comprehensively over the reference period of 1984-2012. The findings of both models indicate that uneven income distribution in Pakistan is a contributing factor towards terrorism when measured through GDP per capita. This supports the hypothesis that the immiserizing modernization theory is applicable to the state of Pakistan, where the underprivileged are marginalized. Results also suggest that other socio-economic variables (poverty, unemployment, and consumer confidence) can reduce the brutality of terrorism once these conditions are catered to and improved. The rationale of opportunity cost is at the base of this argument: poor employment conditions and poverty reduce the opportunity cost for individuals to be recruited by terrorist organizations, as economic returns are considerably low, thus increasing the supply of volunteers and, subsequently, the intensity of terrorism. The argument of political freedom as a means of lowering terrorism stands true: the more people are politically repressed, the more alternative and illegal means they will find to make their voices heard. Also, the argument that a politically transitioning economy faces more terrorism is found applicable to Pakistan. Finally, the study contributes to an ongoing debate on which of the two sets of factors is more significant with relation to terrorism by suggesting that socio-economic factors are the primary causes of terrorism in Pakistan.
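
To illustrate the econometric specification named above, the following is a minimal sketch of a finite distributed lag model estimated by OLS, with y regressed on current and lagged values of a single regressor. The series are synthetic stand-ins; the study's actual variables and lag structure are not reproduced.

    import numpy as np

    # Hypothetical annual series over 1984-2012 (T = 29): outcome y_t and a
    # single socio-economic regressor x_t, made up for the sketch.
    rng = np.random.default_rng(2)
    T = 29
    x = rng.normal(size=T).cumsum()
    y = 0.5 * x + 0.3 * np.roll(x, 1) + rng.normal(scale=0.5, size=T)

    # Finite distributed lag model: y_t = a + b0*x_t + b1*x_{t-1} + b2*x_{t-2} + e_t
    L = 2
    rows = [[1.0] + [x[t - k] for k in range(L + 1)] for t in range(L, T)]
    X = np.array(rows)
    beta, *_ = np.linalg.lstsq(X, y[L:], rcond=None)
    print("intercept and lag coefficients b0..b2:", np.round(beta, 3))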

Keywords: terrorism, socioeconomic conditions, political freedom, distributed lag model, ordinary least square

Procedia PDF Downloads 310
2955 Assessing the Accessibility to Primary Percutaneous Coronary Intervention

Authors: Tzu-Jung Tseng, Pei-Hsuen Han, Tsung-Hsueh Lu

Abstract:

Background: Ensuring that patients with ST-elevation myocardial infarction (STEMI) can access hospitals that perform percutaneous coronary intervention (PCI) in time is an important concern of healthcare managers. One commonly used method to assess the coverage of population access to a PCI hospital is the GIS-estimated straight-line distance (crow-fly distance) between the district centroid and the nearest PCI hospital. If the distance is within a given threshold (such as 20 km), the entire population of that district is considered to have appropriate access to PCI. The premise of using the district centroid to estimate the coverage of the population residing in that district is that the people living in the district are evenly distributed. In reality, population density is not evenly distributed within an administrative district, especially in rural districts. Fortunately, the Taiwan government recently released the basic statistical area (on average 450 people per area), which provides an opportunity to estimate the coverage of population access to PCI services more accurately. Objectives: We aimed in this study to compare the population covered by a given PCI hospital according to the traditional administrative district versus the basic statistical area. We further examined whether the difference between the two geographic units is larger in rural areas than in urban areas. Method: We selected two hospitals in Tainan City for this analysis: hospital A in an urban area and hospital B in a rural area. The population in each traditional administrative district and basic statistical area was obtained from the Taiwan National Geographic Information System, Ministry of Internal Affairs. Results: The estimated population living within 20 km of hospitals A and B was 1,515,846 and 323,472, respectively, according to the traditional administrative district, and 1,506,325 and 428,556 according to the basic statistical area. Conclusion: In the urban area, the estimated population with access to PCI services was similar between the two geographic units. In rural areas, however, the access population can be substantially misestimated.
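
The coverage computation described above reduces to a distance test between population centroids and the nearest PCI hospital. Below is a minimal sketch using great-circle (haversine) distance over a few hypothetical basic statistical areas; the coordinates and populations are illustrative, while the 20 km threshold follows the study.

    import numpy as np

    # Hypothetical (lat, lon) coordinates in degrees: a PCI hospital and the
    # centroids of a few small statistical areas with their populations.
    hospital = (23.00, 120.20)
    areas = np.array([[23.05, 120.25], [23.30, 120.60], [22.85, 120.05]])
    pop = np.array([450, 430, 480])

    def haversine_km(p, q):
        """Great-circle distance from point p to each (lat, lon) row of q, in km."""
        lat1, lon1, lat2, lon2 = map(np.radians, (*p, q[:, 0], q[:, 1]))
        a = (np.sin((lat2 - lat1) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * np.arcsin(np.sqrt(a))

    covered = haversine_km(hospital, areas) <= 20.0   # 20 km threshold from the study
    print(f"population within 20 km: {pop[covered].sum()} of {pop.sum()}")

Summing over fine-grained areas rather than whole districts is what lets the estimate reflect where people actually live.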

Keywords: accessibility, basic statistical area, modifiable areal unit problem (MAUP), percutaneous coronary intervention (PCI)

Procedia PDF Downloads 441
2954 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population

Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath

Abstract:

Gastric cancer is predominantly caused by demographic and diet factors as compared to other cancer types. The aim of the study is to predict Early Gastric Cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram, were selected. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. The supervised machine learning algorithms Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer Perceptron, and Random Forest were used to analyze the dataset using Python Jupyter Notebook Version 3. The classification results were evaluated using the metrics minimum false positives, Brier score, accuracy, precision, recall, F1 score, and the Receiver Operating Characteristic (ROC) curve. With respect to accuracy (in percent) and Brier score, the data analysis results were: Naive Bayes, 88 and 0.11; Random Forest, 83 and 0.16; SVM, 77 and 0.22; Logistic Regression, 75 and 0.25; and Multilayer Perceptron, 72 and 0.27. The Naive Bayes algorithm outperforms the others, with very low false positive rates, a low Brier score, and good accuracy. The Naive Bayes classification results in predicting EGC were very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians in educating patients and the public; thereby, mortality from gastric cancer can be reduced or avoided with this knowledge mining work.
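
As a sketch of the best-performing pipeline reported above, the following trains a Gaussian Naive Bayes classifier and reports accuracy and Brier score with scikit-learn. The feature matrix is a synthetic stand-in for the study's 11 diet and lifestyle features; only the class sizes (160 controls, 80 cases) mirror the cohort.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, brier_score_loss

    # Synthetic stand-in: 240 subjects, 11 features, cases shifted from controls.
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0, 1, (160, 11)), rng.normal(0.8, 1, (80, 11))])
    y = np.array([0] * 160 + [1] * 80)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    model = GaussianNB().fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]           # predicted case probability
    print(f"accuracy: {accuracy_score(y_te, p > 0.5):.2f}")
    print(f"Brier score: {brier_score_loss(y_te, p):.2f}")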

Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics

Procedia PDF Downloads 140
2953 Sensor Registration in Multi-Static Sonar Fusion Detection

Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin

Abstract:

In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors of each sonar in detection, including distance error and angle error, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target; the target position detected by each sonar is based on that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least-squares processing to each sonar's data to obtain the observation value. A MATLAB simulation is carried out in the underwater acoustic environment, and the simulation results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but increasing random noise slows down its convergence rate. The LS method is an improvement on the RTQC method and is widely used in two-dimensional registration. The improved method can be used for underwater multi-target detection registration.
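
A stripped-down sketch of the LS registration idea follows: with a constant range bias and bearing bias and known reference positions, the biases are the least-squares solution of a linear system (which, in this simple formulation, reduces to averaging the residuals). All geometry and noise levels are hypothetical.

    import numpy as np

    # Hypothetical setup: one sonar with constant range bias dr and bearing bias db.
    rng = np.random.default_rng(4)
    n = 50
    dr_true, db_true = 15.0, np.radians(1.5)
    r_true = rng.uniform(500, 2000, n)                 # true ranges (m)
    b_true = rng.uniform(-np.pi, np.pi, n)             # true bearings (rad)
    r_meas = r_true + dr_true + rng.normal(0, 2.0, n)  # biased, noisy measurements
    b_meas = b_true + db_true + rng.normal(0, 0.002, n)

    # Linear LS: residuals [r_meas - r_true, b_meas - b_true] = A @ [dr, db]
    res = np.concatenate([r_meas - r_true, b_meas - b_true])
    A = np.zeros((2 * n, 2))
    A[:n, 0] = 1.0        # range residuals depend only on dr
    A[n:, 1] = 1.0        # bearing residuals depend only on db
    (dr_hat, db_hat), *_ = np.linalg.lstsq(A, res, rcond=None)
    print(f"estimated range bias: {dr_hat:.1f} m, "
          f"bearing bias: {np.degrees(db_hat):.2f} deg")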

Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem

Procedia PDF Downloads 151
2952 Efficiency of an Algae-Zinc Complex Compared to Inorganic Zinc Sulfate on Broilers Performance

Authors: R. Boulmane, C. Alleno, D. Marzin

Abstract:

Trace minerals play an essential role in vital processes and are essential to many biological and physiological functions of the animal. They are usually incorporated in the form of inorganic salts such as sulfates and oxides. Most of these inorganic salts are excreted undigested by the animal, causing economic losses as well as environmental pollution. In this context, the use of alternative organic trace minerals with higher bioavailability is emerging. This study was set up to evaluate the effect of using an algae-zinc complex in the feed, in replacement of zinc sulfate, on the growth performance of broiler chickens. One thousand two hundred 1-day-old chicks were randomly distributed to 30 pens allocated to 1 of 3 groups receiving different diets: a standard diet containing 35 ppm of inorganic zinc sulfate (C+), a test diet containing 35 ppm of algae-based zinc (T+), and a test diet containing a half dose (16 ppm) of algae-based zinc (T-). Three different feeds were distributed, from D0-D11, D11-D21, and D21-D35. Individual weighing of the animals (D21 and D35), feed consumption (D11, D21, and D35), and pododermatitis occurrence (D35) were monitored. Data were submitted to analysis of variance. Results show that in the finishing period, the ADWG of the T+ and T- groups was significantly higher than that of the control C+ (+6%, P = 0.03). On the other hand, the FCR for the total period was lower for both the T+ and T- groups than for the control C+ (-1.2%, P = 0.04). Pododermatitis scoring also showed fewer lesions for the test groups receiving algae-based zinc compared to the control group receiving the inorganic one. In the end, this study shows a positive effect of the algae-zinc complex on the growth performance of broilers compared to inorganic zinc, both at the full dose (35 ppm) and the half dose (16 ppm). The use of the algae-zinc complex in the premix proves to be a good alternative for reducing zinc excretion while maintaining performance.

Keywords: algae-zinc complex, broiler performance, organic trace minerals, zinc sulfate

Procedia PDF Downloads 226
2951 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
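
As a toy illustration of the probabilistic sampling step described above, the sketch below builds a random directed graph as a stand-in for the transaction graph, keeps each edge independently with probability p, and scales a sampled statistic back up by 1/p. The graph model and parameters are hypothetical, not the paper's Ethereum data or schema.

    import random
    import networkx as nx

    # Stand-in for an Ethereum transaction graph: nodes as addresses/blocks,
    # directed edges as transactions or temporal links.
    random.seed(5)
    g = nx.fast_gnp_random_graph(5000, 0.001, seed=5, directed=True)

    # Probabilistic sampling: keep each edge independently with probability p,
    # then estimate full-graph quantities from the sample (scaling by 1/p).
    p = 0.1
    sampled_edges = [e for e in g.edges if random.random() < p]
    est_edges = len(sampled_edges) / p
    print(f"true edge count: {g.number_of_edges()}, "
          f"estimate from {len(sampled_edges)} sampled edges: {est_edges:.0f}")

The same inverse-probability scaling carries over to richer statistics computed on the sampled subgraph, which is what keeps the reduced analysis statistically representative.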

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 56
2950 Keynote Talk: The Role of Internet of Things in the Smart Cities Power System

Authors: Abdul-Rahman Al-Ali

Abstract:

As the number of mobile devices grows exponentially, about 50 billion devices are estimated to be connected to the Internet by the year 2020; by the end of this decade, an average of eight connected devices per person worldwide is expected. These 50 billion devices are not only mobile phones and data-browsing gadgets, but machine-to-machine and man-to-machine devices. With such growing numbers of devices, the Internet of Things (IoT) concept has recently become one of the emerging technologies. Within smart grid technologies, smart home appliances, Intelligent Electronic Devices (IED), and Distributed Energy Resources (DER) are major IoT objects that can be made addressable using IPv6. These objects are called the smart grid Internet of Things (SG-IoT). The SG-IoT generates big data that requires high-speed computing infrastructure, widespread computer networks, big data storage, software, and platform services. A utility company's control and data centers cannot handle such a large number of devices, high-speed processing, and massive data storage. Building large data center infrastructure takes a long time and requires widespread communication networks and huge capital investment. Maintaining and upgrading the control and data centers' infrastructure and communication networks, as well as updating and renewing software licenses, collectively requires additional cost. This can be overcome by utilizing emerging computing paradigms such as cloud computing, which can serve as a smart grid enabler to replace the utilities' legacy data centers. The talk will highlight the role of IoT, cloud computing services, and their deployment models within smart grid technologies.

Keywords: intelligent electronic devices (IED), distributed energy resources (DER), internet, smart home appliances

Procedia PDF Downloads 307
2949 Short-Term Effects of Extreme Temperatures on Cause Specific Cardiovascular Admissions in Beijing, China

Authors: Deginet Aklilu, Tianqi Wang, Endwoke Amsalu, Wei Feng, Zhiwei Li, Xia Li, Lixin Tao, Yanxia Luo, Moning Guo, Xiangtong Liu, Xiuhua Guo

Abstract:

Extreme-temperature-related cardiovascular diseases (CVDs) have become a growing public health concern. However, the impact of temperature on cause-specific CVDs has not been well studied in the study area. The objective of this study was to assess the impact of temperature on cause-specific cardiovascular hospital admissions in Beijing, China. We obtained data covering 16 districts in Beijing from 2013 to 2017 from 172 large general hospitals, via the Beijing Public Health Information Center Cardiovascular Case Database, and from the China Meteorological Administration. We used a time-stratified case-crossover design with a distributed lag nonlinear model (DLNM) to derive the impact of temperature on hospital CVD admissions with lags of up to 27 days. The temperature data were stratified as cold (extreme and moderate) and hot (moderate and extreme). Within five years (January 2013-December 2017), a total of 460,938 CVD admission cases (male 54.9%, female 45.1%) were reported. The exposure-response relationship for hospitalization was described by a "J" shape for total and cause-specific admissions. An increase in the six-day moving average temperature from moderate hot (30.2 °C) to extreme hot (36.9 °C) resulted in a significant increase in CVD admissions of 16.1% (95% CI = 12.8%-28.9%). However, the effect of cold temperature exposure on CVD admissions over a lag of 0-27 days was found to be not significant, with a relative risk of 0.45 (95% CI = 0.378-0.55) for extreme cold (-8.5 °C) and 0.53 (95% CI = 0.47-0.60) for moderate cold (-5.6 °C). The results of this study indicate that exposure to extremely high temperatures is strongly associated with an increase in cause-specific CVD admissions. These findings may help raise awareness among the general population, government, and private sectors regarding the effects of extreme weather conditions on CVD.

Keywords: admission, Beijing, cardiovascular diseases, distributed lag nonlinear model, temperature

Procedia PDF Downloads 43
2948 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data

Authors: Martin Pellon Consunji

Abstract:

Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, in order to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to gain a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data-processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be credited to the inherent advantages that bots have over humans in terms of processing large amounts of data, lacking emotions of fear or greed, and predicting market prices using past data and artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. However, the general limitation of these approaches comes down to the fact that limited historical data does not always determine the future, and that many market participants are still human, emotion-driven traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most other well-established markets. Because of this, some human traders have gone back to the tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method that uses neuroevolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data in order to gain a more accurate forecast of future market behavior and account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies. This study's approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and a real Bitcoin market live trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results during a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
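
A minimal sketch of the evolutionary loop described above follows: a population of tiny linear policies mapping two signals (price momentum and a sentiment score) to a long/flat decision is evolved by elitism and Gaussian mutation against a cumulative-return fitness. The data, signals, and policy form are hypothetical stand-ins, not the paper's bots.

    import numpy as np

    rng = np.random.default_rng(6)
    T = 500
    price = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, T)))     # synthetic price path
    momentum = np.diff(price, prepend=price[0]) / price
    sentiment = 0.5 * momentum + rng.normal(0, 0.005, T)        # correlated fake signal
    features = np.stack([momentum, sentiment], axis=1)
    returns = np.diff(np.log(price), append=0.0)

    def fitness(w):
        position = (features @ w > 0).astype(float)             # 1 = long, 0 = flat
        return float(np.sum(position * returns))                # cumulative log return

    pop = rng.normal(size=(40, 2))                              # initial population
    for gen in range(30):
        scores = np.array([fitness(w) for w in pop])
        elite = pop[np.argsort(scores)[-10:]]                   # keep the best 10
        children = elite[rng.integers(10, size=30)] + rng.normal(0, 0.1, (30, 2))
        pop = np.vstack([elite, children])                      # elitism + mutation

    best = pop[np.argmax([fitness(w) for w in pop])]
    print("best weights:", np.round(best, 2), "fitness:", round(fitness(best), 3))

Neuroevolution in the paper's sense replaces the linear policy with a neural network evolved the same way; the selection-mutation skeleton is unchanged.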

Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms

Procedia PDF Downloads 105
2947 Medicompills Architecture: A Mathematical Precise Tool to Reduce the Risk of Diagnosis Errors on Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by machine learning, precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of machine learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in molecular biology, bioinformatics, computational biology, and precise medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of artificial intelligence in precise medicine. In fact, current machine learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in precise medicine that delivers diagnoses and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class-optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "needle in a haystack" approach usually used when machine learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data up to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until bio-logical operations can be performed on the basis of the "common denominator" rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-conditions diagnosis, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture would be highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to the better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 54
2946 Advancing Urban Sustainability through Data-Driven Machine Learning Solutions

Authors: Nasim Eslamirad, Mahdi Rasoulinezhad, Francesco De Luca, Sadok Ben Yahia, Kimmo Sakari Lylykangas, Francesco Pilla

Abstract:

With ongoing urbanization, cities face increasing environmental challenges impacting human well-being. To tackle these issues, data-driven approaches in urban analysis have gained prominence, leveraging urban data to promote sustainability. Integrating machine learning techniques enables researchers to analyze and predict complex environmental phenomena, such as Urban Heat Island (UHI) occurrences, in urban areas. This paper demonstrates the implementation of a data-driven approach and interpretable machine learning algorithms, together with interpretability techniques, to conduct comprehensive data analyses for sustainable urban design. The developed framework and algorithms are applied to Tallinn, Estonia, to derive sustainable urban strategies for mitigating urban heat waves. Geospatial data, preprocessed and labeled with UHI levels, are used to train various ML models, with Logistic Regression emerging as the best-performing model based on the evaluation metrics; it is used to derive a mathematical equation separating areas with and without UHI effects, providing insights into UHI occurrences based on buildings and urban features. The derived formula highlights the importance of building volume, height, area, and shape length in creating an urban environment with UHI impact. The data-driven approach and the derived equation inform mitigation strategies and sustainable urban development in Tallinn and offer valuable guidance for other locations with varying climates.
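
To illustrate how an interpretable equation can be read off a logistic regression model, as described above, the sketch below fits scikit-learn's LogisticRegression on synthetic stand-ins for the building features named in the abstract and prints the fitted decision rule. Feature ranges, coefficients, and labels are all hypothetical, not the Tallinn data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 1000
    # Columns: building volume (m^3), height (m), footprint area (m^2), shape length (m).
    X = rng.uniform([500, 3, 100, 40], [50_000, 60, 2_000, 400], size=(n, 4))
    logit = 5e-5 * X[:, 0] + 0.03 * X[:, 1] - 1e-3 * X[:, 2] + 4e-3 * X[:, 3] - 3.0
    y = (1.0 / (1.0 + np.exp(-logit)) > rng.random(n)).astype(int)  # synthetic UHI labels

    # Standardize so the fitted coefficients are comparable across features.
    mu, sd = X.mean(axis=0), X.std(axis=0)
    model = LogisticRegression().fit((X - mu) / sd, y)

    names = ["volume", "height", "area", "shape_length"]
    terms = " + ".join(f"{c:+.3f}*z({f})" for c, f in zip(model.coef_[0], names))
    print(f"P(UHI) = sigmoid({terms} {model.intercept_[0]:+.3f})  # z = standardized feature")

Because the model is linear in the (standardized) features, the printed expression is itself the "mathematical equation" an interpretable workflow hands to planners.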

Keywords: data-driven approach, machine learning transparent models, interpretable machine learning models, urban heat island effect

Procedia PDF Downloads 11
2945 Analysis of Landscape Pattern Evolution in Banan District, Chongqing, Based on GIS and FRAGSTATS

Authors: Wenyang Wan

Abstract:

The study of urban land use and landscape patterns is a current hotspot in the fields of planning and design, ecology, etc., and is of great significance for the construction of the overall humanistic ecosystem of the city and the optimization of the urban spatial structure. Banan District, as the main part of the eastern eco-city planning of Chongqing Municipality, is a new high ground for highlighting the ecological characteristics of Chongqing, realizing the effective transformation of ecological value, and promoting the integrated development of urban and rural areas. The analytical methods of the land use transfer matrix (GIS) and landscape pattern indices (Fragstats) were used to study the characteristics and laws of the evolution of the land use landscape pattern in Banan District from 2000 to 2020, providing a reference for Banan District in alleviating ecological contradictions in the landscape. The results of the study show that: ① Banan District is rich in land use types, with cultivated land still accounting for 57.15% of the total landscape area in 2020, an absolute majority of the land use structure of Banan District; ② from 2000 to 2020, land use conversion in Banan District was characterized as cropland > woodland > grassland > shrubland > built-up land > water bodies > wetlands, with the conversion of cropland to built-up land being the largest; ③ from 2000 to 2020, the landscape elements of Banan District were distributed in a balanced way, and the landscape types were rich and diversified, but under the influence of human disturbance, the shapes of landscape elements tended to be irregular, dominant patches were scattered, and patch connectivity was poor. It is recommended that, in future regional ecological construction, the layout be rationally optimized, the relationships between landscape components be coordinated, the connectivity between landscape patches be strengthened, and the degree of landscape fragmentation be reduced.

Keywords: land use transfer, landscape pattern evolution, GIS and FRAGSTATS, Banan District

Procedia PDF Downloads 65
2944 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) is considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation; however, the published results do not show satisfactory classification accuracy. This work aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, MIT-BIH AF, the Normal Sinus Rhythm RR Interval Database, and the MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-min RR-interval windows, and four specific features were then calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find important features for discriminating between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
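
A minimal LVQ1 sketch follows, with one prototype per class on hypothetical stand-ins for per-window RR-interval features (not the cited databases): the winning prototype moves toward a correctly classified sample and away from a misclassified one.

    import numpy as np

    # Synthetic stand-in: 4 features per 1-min window; class 0 = normal sinus
    # rhythm, class 1 = AF.
    rng = np.random.default_rng(8)
    X = np.vstack([rng.normal(0, 1, (300, 4)), rng.normal(1.5, 1, (300, 4))])
    y = np.array([0] * 300 + [1] * 300)

    # One prototype per class, initialized at the class means.
    protos = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
    labels = np.array([0, 1])
    lr = 0.05
    for epoch in range(20):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))  # winning prototype
            step = lr * (X[i] - protos[j])
            protos[j] += step if labels[j] == y[i] else -step     # LVQ1 update rule

    pred = labels[np.argmin(np.linalg.norm(X[:, None] - protos, axis=2), axis=1)]
    print(f"training accuracy: {(pred == y).mean():.2%}")

Classifying a new window is a nearest-prototype lookup, a handful of arithmetic operations, which is what makes LVQ attractive for the telecare setting the abstract mentions.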

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 249
2943 Using the Weakest Precondition to Achieve Self-Stabilization in Critical Networks

Authors: Antonio Pizzarello, Oris Friesen

Abstract:

Networks, such as the electric power grid, must demonstrate exemplary performance and integrity. Integrity depends on the quality of both the system design model and the deployed software. Integrity of the deployed software is key, for both the original version and the many versions produced throughout numerous maintenance activities. Current software engineering technology and practice do not produce adequate integrity. Distributed systems utilize networks where each node is an independent computer system. The connections between nodes are realized via a network that is normally redundantly connected, to guarantee the presence of a path between two nodes in case some branch fails. Furthermore, at each node there is software that may fail. Self-stabilizing protocols are usually present that recognize failures in the network and perform a repair action that brings the node back to a correct state. These protocols, first introduced by E. W. Dijkstra, are currently present in almost all Ethernets. Super-stabilizing protocols, capable of reacting to a change in the network topology due to the removal or addition of a branch, are less common but are theoretically defined and available. This paper describes how to use the Software Integrity Assessment (SIA) methodology to analyze self-stabilizing software. SIA is based on the UNITY formalism for parallel and distributed programming, which allows the analysis of code for verifying the progress property "p leads-to q", describing the progress of all computations starting in a state satisfying p to a state satisfying q via the execution of one or more system modules. As opposed to demonstrably inadequate test and evaluation methods, SIA allows the analysis and verification of any network self-stabilizing software, as well as any other software that is designed to recover from failure without external intervention by maintenance personnel. The model to be analyzed is obtained by automatic translation of the system code into a transition system based on the use of the weakest precondition.
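
For concreteness, below is a sketch of Dijkstra's K-state token ring, the canonical self-stabilizing protocol referenced above: from an arbitrary (corrupted) initial state, the ring provably converges to a legitimate state with exactly one privilege circulating. The simulation uses a central daemon that fires one privileged machine at a time; parameters are illustrative.

    import random

    random.seed(9)
    n, K = 5, 7                                   # K > n guarantees stabilization
    x = [random.randrange(K) for _ in range(n)]   # arbitrary (corrupted) initial state

    def privileged(i):
        # Machine 0 is privileged when it equals its left neighbor (the last
        # machine); every other machine is privileged when it differs from it.
        return x[0] == x[-1] if i == 0 else x[i] != x[i - 1]

    def fire(i):
        x[i] = (x[0] + 1) % K if i == 0 else x[i - 1]

    for step in range(100):
        tokens = [i for i in range(n) if privileged(i)]
        if len(tokens) == 1:                      # legitimate state reached
            print(f"stabilized after {step} steps: privilege at machine {tokens[0]}")
            break
        fire(random.choice(tokens))               # central daemon picks one machine

The progress argument SIA-style analysis would verify is exactly "any state leads-to a state with one privilege", the p leads-to q property described in the abstract.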

Keywords: network, power grid, self-stabilization, software integrity assessment, UNITY, weakest precondition

Procedia PDF Downloads 208
2942 Formulation of Famotidine Solid Lipid Nanoparticles (SLN): Preparation, Evaluation and Release Study

Authors: Rachmat Mauludin, Nurmazidah

Abstract:

Background and purpose: Famotidine is an H2 receptor blocker. Its oral absorption is rapid enough, but famotidine can be degraded by stomach acid, reducing the dose by up to 35.8% after 50 minutes. The drug also undergoes first-pass metabolism, which reduces its bioavailability to only 40-50%. To overcome these problems, Solid Lipid Nanoparticles (SLNs) can be formulated as an alternative delivery system. SLN is a lipid-based drug delivery technology with a 50-1000 nm particle size, in which the drug is incorporated into biocompatible lipids and the lipid particles are stabilized using appropriate stabilizers. When the particle size is 200 nm or below, lipid particles containing famotidine can be absorbed through the lymphatic vessels into the subclavian vein, so first-pass metabolism can be avoided. Method: Famotidine SLNs with various stabilizer compositions were prepared using a high-speed homogenization and sonication method. Then, the particle size distribution, zeta potential, entrapment efficiency, particle morphology, and in vitro release profiles were evaluated. Optimization of the sonication time was also carried out. Result: The particle size of the SLNs, measured by a particle size analyzer, ranged from 114.6 to 455.267 nm. SLNs ultrasonicated for 5 minutes had smaller particle sizes than those ultrasonicated for 10 or 15 minutes. The entrapment efficiency of the SLNs ranged from 74.17% to 79.45%. The particle morphology of the SLNs was spherical, with individually distributed particles. The release study revealed that in acid medium, 28.89-80.55% of the famotidine was released after 2 hours, while in basic medium, 40.5-86.88% was released in the same period. Conclusion: The best formula was the SLNs stabilized by 4% Poloxamer 188 and 1% Span 20, which had a particle size of 114.6 nm in diameter, 77.14% of the famotidine entrapped, and a spherical, individually distributed particle morphology. The SLNs with the best drug release profile were those stabilized by 4% Eudragit L 100-55 and 1% Tween 80, which released 36.34% in pH 1.2 solution and 74.13% in pH 7.4 solution after 2 hours. The optimum sonication time was 5 minutes.

Keywords: famotidine, SLN, high-speed homogenization, particle size, release study

Procedia PDF Downloads 840
2941 Analysis of the Evolution of Landscape Spatial Patterns in Banan District, Chongqing, China

Authors: Wenyang Wan

Abstract:

The study of urban land use and landscape patterns is a current hotspot in the fields of planning and design, ecology, etc., and is of great significance for the construction of the overall humanistic ecosystem of the city and the optimization of the urban spatial structure. Banan District, as the main part of the eastern eco-city planning of Chongqing Municipality, is a high ground for highlighting the ecological characteristics of Chongqing, realizing the effective transformation of ecological value, and promoting the integrated development of urban and rural areas. The analytical methods of the land use transfer matrix (GIS) and landscape pattern indices (Fragstats) were used to study the characteristics and laws of the evolution of the land use landscape pattern in Banan District from 2000 to 2020, providing a reference for Banan District in alleviating ecological contradictions in the landscape. The results of the study show that ① Banan District is rich in land use types, with cultivated land still accounting for 57.15% of the total landscape area in 2020, an absolute majority of the land use structure of Banan District; ② from 2000 to 2020, land use conversion in Banan District was characterized as cropland > woodland > grassland > shrubland > built-up land > water bodies > wetlands, with the conversion of cropland to built-up land being the largest; ③ from 2000 to 2020, the landscape elements of Banan District were distributed in a balanced way, and the landscape types were rich and diversified, but under the influence of human disturbance, the shapes of landscape elements tended to be irregular, dominant patches were scattered, and patch connectivity was poor. It is recommended that, in future regional ecological construction, the layout be rationally optimized, the relationships between landscape components be coordinated, the connectivity between landscape patches be strengthened, and the degree of landscape fragmentation be reduced.

Keywords: land use transfer, landscape pattern evolution, GIS and Fragstats, Banan District

Procedia PDF Downloads 55
2940 Aeroelastic Analysis of Nonlinear All-Movable Fin with Freeplay in Low-Speed

Authors: Laith K. Abbas, Xiaoting Rui, Pier Marzocca

Abstract:

Aerospace systems, generally speaking, are inherently nonlinear, and these nonlinearities may modify the behavior of the system. Nonlinearities in an aeroelastic system can be divided into structural and aerodynamic ones. Structural nonlinearities can be subdivided into distributed and concentrated: distributed nonlinearities are spread over the whole structure, representing the characteristics of materials and large motions, while concentrated nonlinearities act locally, representing loose attachments, worn hinges of control surfaces, and the presence of external stores. Concentrated nonlinearities can be approximated by one of the classical structural nonlinearities, namely cubic, free-play, and hysteresis, or by a combination of these, for example, a free-play and a cubic one. Compressibility, aerodynamic heating, separated flows, and turbulence effects are important aspects that result in nonlinear aerodynamic behavior. The present work addresses an issue related to low-speed flutter and its catastrophic/benign character, represented by the Limit Cycle Oscillation (LCO) of an all-movable fin, as well as its control. To approach this issue: (1) the quasi-steady (QS) theory and computational fluid dynamics (CFD) of subsonic flow are implemented; (2) the flutter motion equations of a two-dimensional typical section with cubic nonlinear stiffness in the pitching direction and a free-play gap are established; (3) the uncoupled bending/torsion frequencies of the selected fin are computed using the recently developed Transfer Matrix Method of Multibody System Dynamics (MSTMM); and (4) time simulations are carried out to study the bifurcation behavior of the aeroelastic system. The main objective of this study is to investigate how the LCO and chaotic behavior are influenced by the coupled aeroelastic nonlinearities, with the intent of implementing a control capability enabling one to control both the flutter boundary and its character. In this way, the operational envelope of the aerospace vehicle may be expanded without failure.
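
As a qualitative illustration of the free-play LCO mechanism discussed above, the sketch below integrates a single-degree-of-freedom pitch oscillator with a free-play gap, cubic restoring stiffness, and a Van der Pol-type term standing in for amplitude-dependent aerodynamic damping. It is not the paper's coupled QS/CFD-MSTMM model; all parameters are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    delta, k, k3 = 0.01, 100.0, 1.0e5        # gap (rad), linear and cubic stiffness
    c0, c2 = 0.2, 400.0                      # damping: destabilizing at small angles

    def moment(theta):
        # Dead zone of width 2*delta, then linear + cubic restoring moment.
        s = np.sign(theta) * np.maximum(np.abs(theta) - delta, 0.0)
        return k * s + k3 * s**3

    def rhs(t, y):
        theta, omega = y
        damping = (-c0 + c2 * theta**2) * omega   # negative damping near zero pitch
        return [omega, -damping - moment(theta)]

    sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.05], max_step=0.01)
    amp = np.abs(sol.y[0][sol.t > 50.0]).max()    # steady-state LCO amplitude
    print(f"limit-cycle pitch amplitude ~ {np.degrees(amp):.2f} deg")

The small initial disturbance grows while the pitch angle is small and saturates into a bounded oscillation, the benign LCO character the time simulations in the study are meant to map out.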

Keywords: aeroelasticity, CFD, MSTMM, flutter, freeplay, fin

Procedia PDF Downloads 358
2939 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problem and Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) uses BT to improve SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in an SCS are a common problem for many organizations, and they must be estimated because they can affect existing cost-control strategies. The hurdle is that the costs of developing and running BT in an SCS are, in most cases, not yet clear. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total cost of the SCS; predicting it may help managers decide whether BT offers an economic advantage. The first objective of this research is to identify the main BT installation cost components in an SCS needed for deeper cost analysis, and to categorize the main groups of cost components in enough detail to use them in the prediction process. The second objective is to determine a suitable supervised learning technique for predicting the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the running cost of BT contributes to the total cost of the SCS. 2. Work Performed: Supervised learning, applied successfully in various fields, prepares and processes a data set and trains a model to predict an outcome measurement from previously unseen input data. The following steps are conducted. The first step is a literature review to identify the different cost components of BT installation in an SCS. Based on this review, supervised learning methods suitable for BT installation cost prediction are chosen; according to the literature, algorithms that provide a powerful tool for classifying BT installation components and predicting BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). The third step is choosing a case study to feed data into the models. Finally, we propose the model with the best predictive performance for finding the minimum BT installation cost in an SCS. 3. Expected Results and Conclusion: This study proposes a cost prediction of BT installation in an SCS with the help of supervised learning algorithms. We first select a case study in the field of BT-enabled SCS, then use several supervised learning algorithms to predict BT installation cost, and continue until the best predictive performance for developing and running BT in an SCS is found. Finally, the results will be presented at the conference.
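The SVR step named above could look roughly like the following scikit-learn sketch. The feature names (node count, integration effort, transaction volume, consensus-energy index) and the synthetic data are assumptions for demonstration only; a real study would use the cost components and case-study data gathered in the earlier steps.

# Illustrative sketch of the SVR step: predicting BT installation cost in an
# SCS from cost-component features. Features and data are hypothetical.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 200
# Hypothetical cost components: node count, integration effort (person-days),
# annual transaction volume, consensus-energy cost index.
X = rng.uniform([5, 10, 1e4, 0.1], [500, 400, 1e7, 1.0], size=(n, 4))
# Synthetic "true" installation cost, just to make the example runnable.
y = 2e3 * X[:, 0] + 1.5e3 * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 5e3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1e5, epsilon=1e3))
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))

Comparing the held-out error of this pipeline against BP and ANN models trained on the same split is the natural way to select the "best predictive performance" the abstract refers to.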

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 109
2938 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer

Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom

Abstract:

Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, although the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer; these techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches, the Multi-Layer Perceptron (MLP) and the Convolutional Neural Network (CNN), as well as five machine learning algorithms, Decision Tree (C4.5), Naïve Bayesian (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We evaluate and compare the classifiers by selecting appropriate metrics for classifier performance and an appropriate tool to quantify that performance. The main purpose of the study is to predict and diagnose breast cancer by applying the mentioned algorithms and to identify the most effective one with respect to the confusion matrix, accuracy, and precision. The CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment using the Python programming language.
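A minimal sketch of this comparison workflow is given below, using the Wisconsin Diagnostic dataset as shipped with scikit-learn. It covers the MLP and four of the classical classifiers; XGBoost and the CNN are omitted since they need the xgboost package and a deep learning framework. The default hyperparameters here are assumptions, not the tuned settings behind the paper's reported 0.982456 accuracy.

# Sketch: compare several classifiers on the Wisconsin Diagnostic dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

models = {
    "DecisionTree": DecisionTreeClassifier(random_state=42),
    "NaiveBayes": GaussianNB(),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(max_iter=1000, random_state=42)),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))  # rows: true class, cols: predicted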

Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN

Procedia PDF Downloads 56
2937 Impact of Combined Heat and Power (CHP) Generation Technology on Distribution Network Development

Authors: Sreto Boljevic

Abstract:

In the absence of considerable investment in electricity generation, transmission, and distribution network (DN) capacity, the demand for electrical energy will quickly strain the capacity of the existing electrical power network. The anticipated growth and proliferation of electric vehicles (EVs) and heat pumps (HPs) make it likely that the additional load from EV charging and HP operation will require capital investment in the DN. While an area-wide rollout of EVs and HPs will contribute to the decarbonization of the energy system, they represent new challenges for the existing low-voltage (LV) network. Distributed energy resources (DER), operating both as part of the DN and in off-network mode, have been offered as a means to meet growing electricity demand while maintaining and improving DN reliability, resiliency, and power quality. DN planning has traditionally been done by forecasting future growth in demand and estimating the peak load that the network should meet. However, new problems are arising, associated with the high proliferation of EVs and HPs as loads imposed on the DN and with the promotion of electricity generation from renewable energy sources (RES). High distributed generation (DG) penetration and a large increase in load at low-voltage DNs may have numerous impacts, creating issues that include energy losses, voltage control, fault levels, reliability, resiliency, and power quality. To mitigate the negative impacts and at the same time enhance the positive ones in this new operational state of the DN, CHP system integration can be seen as the best action to postpone or reduce the capital investment needed to facilitate, and to maximize the benefits of, EV, HP, and RES integration in the low-voltage DN. The aim of this paper is to derive an algorithm using an analytical approach; its implementation will provide a way to place the CHP system optimally in the DN so as to maximize the integration of RES and the proliferation of EVs and HPs.
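To illustrate the flavor of such a placement algorithm, the sketch below brute-forces the CHP bus that minimizes approximate series losses on a simple radial feeder. The feeder data, the flat-voltage loss approximation, and the single-objective (loss-only) criterion are assumptions for demonstration; they are not the paper's analytical algorithm or its network.

# Hedged sketch: pick the CHP bus minimizing approximate I^2*R losses
# on a small radial feeder. All data below are illustrative assumptions.
import numpy as np

V = 0.4e3                                            # nominal LV voltage (V)
loads = np.array([40e3, 25e3, 60e3, 30e3, 45e3])     # bus loads (W)
r_seg = np.array([0.05, 0.04, 0.06, 0.05, 0.07])     # segment resistances (ohm)
p_chp = 80e3                                         # CHP electrical output (W)

def feeder_losses(injection_bus):
    """Approximate series losses with CHP injecting at one bus.

    Segment i carries the net power of all buses at or beyond i; losses
    use the flat-voltage approximation P_loss = R * (P/V)^2.
    """
    net = loads.astype(float).copy()
    if injection_bus is not None:
        net[injection_bus] -= p_chp
    flows = np.cumsum(net[::-1])[::-1]   # power flowing through each segment
    return np.sum(r_seg * (flows / V) ** 2)

base = feeder_losses(None)
best = min(range(len(loads)), key=feeder_losses)
print(f"losses without CHP: {base:.1f} W")
print(f"best CHP bus: {best}, losses: {feeder_losses(best):.1f} W")

A full analytical method would also weigh voltage profiles, fault levels, and the CHP heat load, but the exhaustive-candidate structure shown here carries over.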

Keywords: combined heat & power (CHP), distribution networks, EVs, HPs, RES

Procedia PDF Downloads 186
2936 Expression of Somatostatin and Neuropeptide Y in Dorsal Root Ganglia Following Hind Paw Incision in Rats

Authors: Anshu Bahl, Saroj Kaler, Shivani Gupta, S B Ray

Abstract:

Background: Somatostatin is an endogenous regulatory neuropeptide, and somatostatin and its analogues play an important role in neuropathic and inflammatory pain. Neuropeptide Y (NPY) is extensively distributed in the mammalian nervous system and has important roles in blood pressure, circadian rhythm, obesity, appetite, and memory. The purpose of this study was to investigate somatostatin and NPY expression in dorsal root ganglia during pain; the plantar incision model in rats is similar to postoperative pain in humans. Methods: Twenty-four adult male Sprague-Dawley rats were randomly distributed into control (n=6) and incision (n=18) groups. Using the Hargreaves apparatus, a thermal hyperalgesia behavioural test for nociception was performed under basal conditions and after surgical incision of the right hind paw at different time points (days 1, 3, and 5). The plantar incision was performed as per the standard protocol. Perfusion was done using 4% paraformaldehyde, followed by extraction of the dorsal root ganglia at the L4 level. The tissue was processed for immunohistochemical localisation of somatostatin and neuropeptide Y. Results: The post-incision groups (days 1, 3, and 5) exhibited a significant decrease in paw withdrawal latency compared with the control group. Somatostatin expression was noted under basal conditions; it decreased on day 1 but gradually increased again on day 3 and further on day 5 post incision. Neuropeptide Y expression was noted in the cytoplasm of dorsal root ganglia under basal conditions; compared with the control group, it decreased on day 1 after incision but gradually increased again on day 3, with maximum expression on day 5 post incision. Conclusion: The decrease in paw withdrawal latency indicated nociception, particularly on day 1. Compared with controls, somatostatin and NPY expression was decreased on day 1 post incision, which could be correlated with increased axoplasmic flow towards the spinal cord. Somatostatin and NPY expression was maximal on day 5 post incision, which could be due to decreased migration from the site of synthesis towards the spinal cord.

Keywords: dorsal root ganglia, neuropeptide Y, postoperative pain, somatostatin

Procedia PDF Downloads 159