Search results for: network simulator (NS3)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5022

2592 Hormone Replacement Therapy (HRT) and Its Impact on the All-Cause Mortality of UK Women: A Matched Cohort Study 1984-2017

Authors: Nurunnahar Akter, Elena Kulinskaya, Nicholas Steel, Ilyas Bakbergenuly

Abstract:

Although Hormone Replacement Therapy (HRT) is an effective treatment for ameliorating menopausal symptoms, it has mixed effects on different health outcomes, increasing, for instance, the risk of breast cancer. Because of this, many symptomatic women are left untreated. Untreated menopausal symptoms may result in other health issues, which eventually place an extra burden and cost on the health care system. All-cause mortality analysis may explain the net benefits and risks of HRT; however, it has received far less attention in HRT studies. This study investigated the impact of HRT on all-cause mortality using electronically recorded primary care data from The Health Improvement Network (THIN), which broadly represents the female population of the United Kingdom (UK). The study entry date was the record of the first HRT prescription from 1984, and patients were followed up until death, transfer to another general practice, or the study end date of January 2017. 112,354 HRT users (cases) were matched with 245,320 non-users by age at HRT initiation and general practice (GP). The hazards of all-cause mortality associated with HRT were estimated by a parametric Weibull-Cox model adjusting for a wide range of important medical, lifestyle, and socio-demographic factors. Multilevel multiple imputation techniques were used to deal with missing data. This study found that during 32 years of follow-up, combined HRT reduced the hazard ratio (HR) of all-cause mortality by 9% (HR: 0.91; 95% Confidence Interval, 0.88-0.94) in women aged 46 to 65 at first treatment, compared to non-users of the same age. Age-specific mortality analyses found that combined HRT decreased mortality by 13% (HR: 0.87; 95% CI, 0.82-0.92), 12% (HR: 0.88; 95% CI, 0.82-0.93), and 8% (HR: 0.92; 95% CI, 0.85-0.98) in the 51-55, 56-60, and 61-65 age groups at first treatment, respectively. There was no association between estrogen-only HRT and women’s all-cause mortality. The findings from this study may help to inform the choices of women at menopause and to further educate clinicians and resource planners.
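
As a concrete illustration of the survival-modelling step, the sketch below fits a proportional-hazards model to synthetic matched-cohort data using the lifelines library. The study itself used a parametric Weibull-Cox model with multilevel multiple imputation, which this minimal example does not reproduce; all variable names, effect sizes, and data here are illustrative assumptions.

```python
# Minimal sketch: hazard ratio of all-cause mortality for HRT users vs. non-users.
# Assumes the 'lifelines' package; data, column names, and effect sizes are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 5000
hrt = rng.integers(0, 2, n)              # 1 = combined-HRT user, 0 = matched non-user
age = rng.uniform(46, 65, n)             # age at first treatment / matching age
# Synthetic survival times: HRT coded with HR ~ 0.91; age increases the hazard
baseline = rng.exponential(30, n)
time = baseline * np.exp(-(np.log(0.91) * hrt + 0.03 * (age - 55)))
died = time < 32                         # administrative censoring at 32 years

df = pd.DataFrame({
    "years": np.minimum(time, 32),
    "died": died.astype(int),
    "hrt": hrt,
    "age": age,
})
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
print(cph.summary)   # the exp(coef) column gives hazard ratios with 95% CIs
```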

Keywords: hormone replacement therapy, multiple imputations, primary care data, the health improvement network (THIN)

Procedia PDF Downloads 170
2591 Impact of Agricultural Infrastructure on Diffusion of Technology of the Sample Farmers in North 24 Parganas District, West Bengal

Authors: Saikat Majumdar, D. C. Kalita

Abstract:

The agriculture sector plays an important role in the rural economy of India. It is the backbone of the Indian economy and is the dominant sector in terms of employment and livelihood. Agriculture still contributes significantly to export earnings and is an important source of raw materials as well as of demand for many industrial products, particularly fertilizers, pesticides, agricultural implements, and a variety of consumer goods. The performance of the agricultural sector influences the growth of the Indian economy. According to the 2011 Agricultural Census of India, an estimated 61.5 percent of the rural population is dependent on agriculture. Proper agricultural infrastructure has the potential to transform the existing traditional agriculture into a modern, commercial, and dynamic farming system in India through the diffusion of technology. The rate of adoption of modern technology reflects the progress of development in the agricultural sector. The adoption of any improved agricultural technology is also dependent on the development of road infrastructure or the road network. The present study consisted of 300 sample farmers, of which 150 were taken from the developed area and the remaining 150 from the underdeveloped area. The sample farmers in the developed and underdeveloped areas were selected using a multistage random sampling procedure. In the first stage, North 24 Parganas district was selected purposively. Then, from the district, one developed and one underdeveloped block were selected randomly. In the third phase, 10 villages were selected randomly from each block. Finally, from each village, 15 sample farmers were selected randomly. The extent of adoption of technology in different areas was calculated through various parameters: percentage area under High Yielding Variety cereals, percentage area under High Yielding Variety pulses, area under hybrid vegetables, irrigated area, mechanically operated area, amount spent on fertilizer and pesticides, etc., in both the developed and underdeveloped areas of North 24 Parganas district, West Bengal. The percentage area under High Yielding Variety cereals in the developed and underdeveloped areas was 34.86 and 22.59 percent, respectively, and 42.07 and 31.46 percent for High Yielding Variety pulses. In the case of the area under irrigation, it was 57.66 and 35.71 percent, while for the mechanically operated area it was 10.60 and 3.13 percent, respectively, in the developed and underdeveloped areas of North 24 Parganas district, West Bengal. This clearly showed that the extent of adoption of technology was significantly higher in the developed area than in the underdeveloped area. A better road network system helps farmers increase their farm income, farm assets, cropping intensity, marketed surplus, and rate of adoption of new technology. With this background, an attempt is made in this paper to study the impact of agricultural infrastructure on the adoption of modern technology in agriculture in North 24 Parganas district, West Bengal.

Keywords: agricultural infrastructure, adoption of technology, farm income, road network

Procedia PDF Downloads 101
2590 Multiscale Process Modeling Analysis for the Prediction of Composite Strength Allowables

Authors: Marianna Maiaru, Gregory M. Odegard

Abstract:

During the processing of high-performance thermoset polymer matrix composites, chemical reactions occur during elevated pressure and temperature cycles, causing the constituent monomers to crosslink and form a molecular network that can gradually sustain stress. As the crosslinking process progresses, the material naturally experiences a gradual shrinkage due to the increase in covalent bonds in the network. Once the composite completes the cure cycle and is brought to room temperature, the thermal expansion mismatch of the fibers and matrix causes additional residual stresses to form. These compounded residual stresses can compromise the reliability of the composite material and affect the composite strength. Composite process modeling is greatly complicated by the multiscale nature of the composite architecture. At the molecular level, the degree of cure controls the local shrinkage and thermal-mechanical properties of the thermoset. At the microscopic level, the local fiber architecture and packing affect the magnitudes and locations of residual stress concentrations. At the macroscopic level, the layup sequence controls the nature of crack initiation and propagation due to residual stresses. The goal of this research is to use molecular dynamics (MD) and finite element analysis (FEA) to predict the residual stresses in composite laminates and the corresponding effect on composite failure. MD is used to predict the polymer shrinkage and thermomechanical properties as a function of degree of cure. This information is used as input into FEA to predict the residual stresses on the microscopic level resulting from the complete cure process. Virtual testing is subsequently conducted to predict strength allowables. Experimental characterization is used to validate the modeling.

Keywords: molecular dynamics, finite element analysis, process modeling, multiscale modeling

Procedia PDF Downloads 92
2589 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies

Authors: Chao-Ton Su, Li-Fei Chen

Abstract:

The parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design aims to determine the optimal settings for the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, parameter design aims to determine the operating settings of the manufacturing processes so that non-uniformity in manufacturing processes can be minimized. Parameter design, which tries to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies, which play key roles in the global economy. Quality remains the most important factor that enables these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge is related to root causes and defect patterns. In the R&D stage, root causes are often unknown, and defect patterns are difficult to classify. Additionally, data collection is not easy. Even when high-volume data can be collected, data interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools, such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing the parameter settings for the manufacturing process in Taiwan’s tech companies will be presented to illustrate the proposed approach’s effectiveness. Finally, a discussion of using traditional experimental design versus the proposed approach for process optimization will be made.
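
To make the optimization approach concrete, here is a minimal sketch of a genetic algorithm searching process-parameter settings against a quality-loss objective. The objective function is a stand-in: in practice it would be a trained neural-network surrogate or experimental response model, and the parameter names and bounds are assumptions.

```python
# Sketch of parameter design via a genetic algorithm: search process settings
# (temperature, pressure) that minimize an illustrative quality-loss surrogate.
import numpy as np

rng = np.random.default_rng(0)

def quality_loss(x):
    temp, pres = x[..., 0], x[..., 1]
    # Illustrative Taguchi-style loss: deviation from an (unknown) optimum
    return (temp - 182.0) ** 2 / 100 + (pres - 3.4) ** 2

bounds = np.array([[150.0, 220.0], [1.0, 6.0]])      # [min, max] per parameter
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for gen in range(100):
    loss = quality_loss(pop)
    elite = pop[np.argsort(loss)[:10]]               # selection: keep best 25%
    parents = elite[rng.integers(0, 10, size=(40, 2))]
    alpha = rng.random((40, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
    children += rng.normal(0, 0.5, children.shape)                  # mutation
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmin(quality_loss(pop))]
print(f"best settings: temperature={best[0]:.1f}, pressure={best[1]:.2f}")
```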

Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design

Procedia PDF Downloads 145
2588 Angiogenesis and Blood Flow: The Role of Blood Flow in Proliferation and Migration of Endothelial Cells

Authors: Hossein Bazmara, Kaamran Raahemifar, Mostafa Sefidgar, Madjid Soltani

Abstract:

Angiogenesis is the formation of new blood vessels from existing vessels. Because blood flows through the vessels during angiogenesis, blood flow plays an important role in regulating the angiogenesis process. Multiple mathematical models of angiogenesis have been proposed to simulate the formation of the complicated network of capillaries around a tumor. In this work, a multi-scale model of angiogenesis is developed to show the effect of blood flow on capillaries and network formation. This model spans multiple temporal and spatial scales, i.e., intracellular (molecular), cellular, and extracellular (tissue) scales. At the intracellular or molecular scale, the signaling cascade of endothelial cells is obtained. Two main stages in the development of a vessel are considered. In the first stage, single sprouts are extended toward the tumor. In this stage, the main regulator of endothelial cell behavior is the signals from the extracellular matrix. After anastomosis and the formation of closed loops, blood flow starts in the capillaries. In this stage, blood-flow-induced signals regulate endothelial cell behavior. At the cellular scale, growth and migration of endothelial cells are modeled with a discrete lattice Monte Carlo method called the cellular Potts model (CPM). At the extracellular (tissue) scale, diffusion of tumor angiogenic factors in the extracellular matrix, formation of closed loops (anastomosis), and shear stress induced by blood flow are considered. The model is able to simulate the formation of a closed loop and its extension. The results are validated against experimental data. The results show that, without blood flow, the capillaries are not able to maintain their integrity.
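
A minimal sketch of the cellular-scale machinery follows: a Metropolis copy attempt for a 2D cellular Potts model with the two standard energy terms, adhesion and volume constraint. All parameter values are illustrative, and the Hamiltonian is reduced to those two terms rather than the paper's full multi-scale coupling.

```python
# Minimal 2D cellular Potts model sketch.
# H = adhesion energy between unlike neighbours + volume constraint per cell.
# Parameters (J, LAMBDA, TARGET_VOL, temperature T) are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, J, LAMBDA, TARGET_VOL, T = 50, 2.0, 1.0, 40.0, 4.0
grid = np.zeros((N, N), dtype=int)
grid[20:26, 20:26] = 1                 # cell 1 (an endothelial cell); 0 is medium
grid[30:36, 30:36] = 2                 # cell 2

def energy(g):
    adhesion = J * ((g != np.roll(g, 1, 0)).sum() + (g != np.roll(g, 1, 1)).sum())
    volume = sum(LAMBDA * ((g == c).sum() - TARGET_VOL) ** 2 for c in (1, 2))
    return adhesion + volume

def metropolis_step(g):
    i, j = rng.integers(0, N, 2)                       # target lattice site
    di, dj = rng.choice([-1, 0, 1], 2)                 # random neighbour offset
    src = g[(i + di) % N, (j + dj) % N]                # spin to copy in
    if src == g[i, j]:
        return g
    trial = g.copy()
    trial[i, j] = src
    dH = energy(trial) - energy(g)
    if dH <= 0 or rng.random() < np.exp(-dH / T):      # Boltzmann acceptance
        return trial
    return g

for _ in range(5000):
    grid = metropolis_step(grid)
print("cell volumes:", (grid == 1).sum(), (grid == 2).sum())
```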

Keywords: angiogenesis, endothelial cells, multi-scale model, cellular Potts model, signaling cascade

Procedia PDF Downloads 425
2587 An Investigation Enhancing E-Voting Application Performance

Authors: Aditya Verma

Abstract:

E-voting using blockchain provides us with a distributed system where data are present on each node in the network and are reliable and secure due to blockchain's immutability property. This work compares various blockchain consensus algorithms used for e-voting applications in the past, based on performance and node scalability, and chooses the optimal one. It then improves on one such previous implementation by proposing solutions for the loopholes of the optimally working blockchain consensus algorithm in our chosen application, e-voting.

Keywords: blockchain, parallel bft, consensus algorithms, performance

Procedia PDF Downloads 167
2586 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review

Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha

Abstract:

Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is not far-fetched, but proper classification of this textual information in a given context has been very difficult. As a result, we conducted a systematic review of previous literature on sentiment classification and the AI-based techniques that have been used, in order to gain a better understanding of how to design and develop a robust and more accurate sentiment classifier that can correctly classify social media textual information of a given context, between hate speech and inverted compliments, with a high level of accuracy by assessing different artificial intelligence techniques. We evaluated over 250 articles from digital sources like ScienceDirect, ACM, Google Scholar, and IEEE Xplore and whittled the pool down to 31 studies. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of performance accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources like Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques like CNN+LSTM, CNN+GRU, and CNN+BERT outperformed single deep learning techniques and machine learning techniques. The Python programming language outperformed Java for sentiment analyzer development due to its simplicity and AI-based library functionalities. Based on some of the important findings from this study, we made recommendations for future research.
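
As an illustration of the hybrid architectures the review found strongest, the sketch below defines a CNN+LSTM binary sentiment classifier in Keras. Vocabulary size, sequence length, and layer sizes are assumptions; training data would come from a corpus such as Twitter or SST.

```python
# Sketch of a hybrid CNN+LSTM sentiment classifier of the kind the review
# found to outperform single models. All hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 20000, 100
inputs = layers.Input(shape=(MAXLEN,))
x = layers.Embedding(VOCAB, 128)(inputs)           # token ids -> dense vectors
x = layers.Conv1D(64, 5, activation="relu")(x)     # local n-gram features
x = layers.MaxPooling1D(4)(x)
x = layers.LSTM(64)(x)                             # long-range dependencies
outputs = layers.Dense(1, activation="sigmoid")(x) # e.g. hate speech vs. not
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```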

Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text

Procedia PDF Downloads 115
2585 Simulation: A Tool for Stabilization of Welding Processes in Lean Production Concepts

Authors: Ola Jon Mork, Lars Andre Giske, Emil Bjørlykhaug

Abstract:

Stabilization of critical processes in order to achieve the right product quality, more efficient production, and smoother flow is a key issue in lean production. This paper presents how simulation of key welding processes can stabilize complicated welding processes in small-scale production, and how simulation can impact the entire production concept seen from the perspective of lean production. First, a field study was made to learn the production processes in the factory, and subsequently the field study was transformed into a value stream map to get insight into each operation, the quality issues, operation times, lead times, and flow of materials. Valuable practical knowledge of how the welding operations were done by operators, appropriate tools and jigs, and the types of robots that could be used was collected. All available information was then implemented into a simulation environment for further elaboration and development. Three researchers, the management of the company, and skilled operators on the shop floor were working on the project over a period of eight months, and a detailed description of the process was made by the researchers. The simulation showed that a number of technical challenges could be solved: the robot program can be tuned in offline mode, and the design and testing of the robot cell could be done in the simulator. Further on, the design of the product could be optimized for robot welding, and the jigs could be designed and tested in the simulation environment. This means that a key issue of lean production can be solved; the welding operation will work with almost 100% performance when it is put into real production. Stabilizing one key process is critical to gaining control of the entire value chain; then a takt time can be established and the focus can be directed towards the next process in the production, which should be stabilized in turn. Results show that industrial parameters like welding time, welding cost, and welding quality can be defined at the simulation stage. Further on, this gives valuable information for calculating the factory's business performance, such as manufacturing volume and manufacturing efficiency. The industrial impact of simulation is a more efficient implementation of lean manufacturing, since the welding process can be stabilized. More research should be done to gain more knowledge about simulation as a tool for the implementation of lean, especially where there are complex processes.
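
The takt-time calculation mentioned above is simple enough to state directly; the figures in this sketch are illustrative.

```python
# Sketch of the takt-time calculation that a stabilized welding process enables.
def takt_time(available_seconds_per_day: float, demand_units_per_day: float) -> float:
    """Takt time = available production time / customer demand."""
    return available_seconds_per_day / demand_units_per_day

shift = 7.5 * 3600            # one 7.5-hour shift in seconds (assumed)
takt = takt_time(shift, 45)   # 45 welded assemblies demanded per day (assumed)
print(f"takt time: {takt:.0f} s per unit")  # the welding cycle must fit inside this
```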

Keywords: simulation, lean, stabilization, welding process

Procedia PDF Downloads 321
2584 Emerging Trends of Geographic Information Systems in Built Environment Education: A Bibliometric Review Analysis

Authors: Kiara Lawrence, Robynne Hansmann, Clive Greentsone

Abstract:

Geographic Information Systems (GIS) are used to store, analyze, visualize, capture, and monitor geographic data. Built environment professionals, and urban planners specifically, need to possess GIS skills to effectively and efficiently plan spaces. GIS application extends beyond the production of map artifacts and can be applied to spatially referenced, real-time data to support spatial visualization, analysis, community engagement, scenario building, and so forth. Though GIS has been used in the built environment for a few decades, its use in education has not been researched enough to draw conclusions on the trends of the last 20 years. This study looks to discover current and emerging trends of GIS in built environment education. A bibliometric review analysis was carried out by exporting documents from Scopus and Web of Science using keywords around "Geographic information systems" OR "GIS" AND "built environment" OR "geography" OR "architecture" OR "quantity surveying" OR "construction" OR "urban planning" OR "town planning" AND "education" between the years 1994 and 2024. A total of 564 documents were identified and exported. The data were then analyzed using VOSviewer software to generate network analyses and visualization maps of keyword co-occurrence, co-citation of documents and countries, and co-author networks. By analyzing each aspect of the data, deeper insight into GIS within education can be gained. Preliminary results from Scopus indicate that GIS research focusing on built environment education seems to have peaked prior to 2014, with much focus on remote sensing, demography, land use, engineering education, and so forth. This invaluable data can help in understanding and implementing GIS in built environment education in ways that are foundational and innovative, to ensure that students are equipped with sufficient knowledge and skills to carry out tasks in their respective fields.
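
A minimal sketch of the keyword co-occurrence counting that underlies such VOSviewer maps, with invented records standing in for the exported Scopus/Web of Science keyword fields:

```python
# Count keyword pairs that appear together in the same record.
from collections import Counter
from itertools import combinations

records = [
    ["GIS", "education", "urban planning"],
    ["GIS", "remote sensing", "land use"],
    ["GIS", "education", "built environment"],
]
cooccurrence = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1
for pair, count in cooccurrence.most_common(5):
    print(pair, count)   # edge weights of the co-occurrence network
```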

Keywords: architecture, built environment, construction, education, geography, geographic information systems, quantity surveying, town planning, urban planning

Procedia PDF Downloads 15
2583 A Mathematical Framework for Expanding a Railway’s Theoretical Capacity

Authors: Robert L. Burdett, Bayan Bevrani

Abstract:

Analytical techniques for measuring and planning railway capacity expansion activities are considered in this article. A preliminary mathematical framework involving track duplication and section subdivisions is proposed for this task. In railways, these features have a great effect on network performance, and for this reason they have been considered. Additional motivation has also arisen from the limitations of prior models that have not included them.
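
As a rough illustration of the two levers the framework considers, the sketch below uses a simplified capacity relation (trains per window = tracks × window / minimum headway); the paper's actual formulation is more detailed, and the headway figures are assumptions.

```python
# Simplified theoretical-capacity sketch: duplication adds parallel tracks,
# and subdividing a long section into blocks shortens the minimum headway.
def section_capacity(window_h: float, headway_min: float, tracks: int = 1) -> float:
    """Trains per time window for one section with the given minimum headway."""
    return tracks * (window_h * 60.0) / headway_min

# A long single-track section whose headway equals its full traversal time:
print(section_capacity(window_h=24, headway_min=30))            # 48 trains/day
# Track duplication doubles capacity for the same headway:
print(section_capacity(window_h=24, headway_min=30, tracks=2))  # 96 trains/day
# Subdividing the section into two blocks halves the headway instead:
print(section_capacity(window_h=24, headway_min=15))            # 96 trains/day
```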

Keywords: capacity analysis, capacity expansion, railways, track subdivision, track duplication

Procedia PDF Downloads 359
2582 Disease Trajectories in Relation to Poor Sleep Health in the UK Biobank

Authors: Jiajia Peng, Jianqing Qiu, Jianjun Ren, Yu Zhao

Abstract:

Background: Insufficient sleep has come under scrutiny as a public health epidemic, yet a comprehensive analysis of the disease trajectories associated with unhealthy sleep habits is still lacking. Objective: This study sought to comprehensively clarify the disease trajectories related to the overall poor sleep pattern and to individual unhealthy sleep behaviors. Methods: 410,682 participants with available information on sleep behaviors were collected from the UK Biobank at the baseline visit (2006-2010). These participants were classified as having high or low risk on each sleep behavior and were followed from 2006 to 2020 to identify increased risks of diseases. We used Cox regression to estimate the associations of high-risk sleep behaviors with elevated risks of diseases, and further established disease trajectories using the significant diseases. Low-risk sleep behaviors were defined as the reference. Thereafter, we also examined the trajectory of diseases linked with the overall poor sleep pattern by combining all of these unhealthy sleep behaviors. Network analysis was used to visualize and present the disease trajectories. Results: During a median follow-up of 12.2 years, we noted 12 medical conditions in relation to unhealthy sleep behaviors and the overall poor sleep pattern among 410,682 participants with a median age of 58.0 years. The majority of participants had unhealthy sleep behaviors; in particular, 75.62% reported frequent sleeplessness and 72.12% had abnormal sleep durations. Besides, a total of 16,032 individuals with an overall poor sleep pattern were identified. In general, three major disease clusters were associated with overall poor sleep status and unhealthy sleep behaviors according to the disease trajectory and network analyses, mainly in the digestive, musculoskeletal and connective tissue, and cardiometabolic systems. Of note, two circulatory disease pairs (I25→I20 and I48→I50) showed the highest risks following these unhealthy sleep habits. Additionally, significant differences in disease trajectories were observed in relation to sex and sleep medication among individuals with poor sleep status. Conclusions: We identified the major disease clusters and high-risk diseases following participants with overall poor sleep health and unhealthy sleep behaviors, respectively. This may suggest the need to investigate potential interventions targeting these key pathways.

Keywords: sleep, poor sleep, unhealthy sleep behaviors, disease trajectory, UK Biobank

Procedia PDF Downloads 92
2581 GRABTAXI: A Taxi Revolution in Thailand

Authors: Danuvasin Charoen

Abstract:

The study investigates the business process and business model of GRABTAXI. The paper also discusses how the company implemented strategies to gain competitive advantages. The data are derived from the analysis of secondary data and in-depth interviews with staff, taxi drivers, and key customers. The findings indicated that the company’s competitive advantages come from being the first mover, emphasising the ease of use and tangible benefits of the application, and using a network effect strategy.

Keywords: taxi, mobile application, innovative business model, Thailand

Procedia PDF Downloads 299
2580 Intrusion Detection Techniques in NaaS in the Cloud: A Review

Authors: Rashid Mahmood

Abstract:

Network as a service (NaaS) has become well known over the last few years in many applications, such as mission-critical applications. In NaaS, prevention methods alone are not adequate as far as security is concerned, so detection methods should be added to address the security issues in NaaS. Authentication and encryption were considered the first solution to the NaaS problem, whereas these are no longer sufficient as NaaS use increases. In this paper, we present the concept of intrusion detection, then survey some of the major intrusion detection techniques in NaaS and aim to compare them on some important criteria.

Keywords: IDS, cloud, NaaS, detection

Procedia PDF Downloads 320
2579 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning

Authors: Richard O’Riordan, Saritha Unnikrishnan

Abstract:

Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, ranging from computer vision techniques to deep learning methods. The transition from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functionality. Furthermore, these algorithms have no room for error when operating at high levels of autonomy. Although the current research details existing computer vision and deep learning algorithms, their methodologies, and individual results, it also details the challenges faced by the algorithms and the resources needed to operate, along with shortcomings experienced during their detection of lanes in certain weather and lighting conditions. This paper will explore these shortcomings and attempt to implement a lane detection algorithm that could be used to achieve improvements in AV lane detection systems. This paper uses a pre-trained LaneNet model to detect lane or non-lane pixels using binary segmentation as the base detection method, first on the existing BDD100k dataset and then on a custom dataset generated locally. The selected roads will be modern, well-laid roads with up-to-date infrastructure and lane markings, while the second road network will be an older road with infrastructure and lane markings reflecting the road network's age. The performance of the proposed method will be evaluated on the custom dataset to compare its performance to that on the BDD100k dataset. In summary, this paper will use transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
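
A sketch of the transfer-learning setup follows: a pre-trained encoder is frozen and a small decoder is trained to emit binary lane/non-lane masks. The paper fine-tunes a pre-trained LaneNet; MobileNetV2 is used here only as a generic stand-in ImageNet encoder, and all shapes are illustrative.

```python
# Transfer-learning sketch for binary lane segmentation: frozen pre-trained
# encoder plus a small trainable upsampling head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(256, 256, 3), include_top=False, weights="imagenet")
base.trainable = False                        # freeze the pre-trained weights

x = base.output                               # 8x8 feature map (stride 32)
for filters in (256, 128, 64, 32, 16):        # upsample back to 256x256
    x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                               activation="relu")(x)
mask = layers.Conv2D(1, 1, activation="sigmoid")(x)   # per-pixel lane probability

model = models.Model(base.input, mask)
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(bdd100k_images, bdd100k_masks, ...)  # then fine-tune on the custom set
```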

Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection

Procedia PDF Downloads 104
2578 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, numerical results proved that, in comparison to traditional methods, the proposed SRCLoc method can significantly improve positioning performance and reduce radio map construction costs.
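
A minimal sketch of the feature-extraction idea: t-SNE compresses noisy fingerprint vectors into a low-dimensional embedding, and a query is matched to its nearest reference fingerprint. The data are synthetic, and this does not reproduce the S-DCGAN radio-map construction.

```python
# Fingerprint feature extraction with t-SNE plus nearest-fingerprint matching.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(7)
n_points, n_aps = 300, 40
rss = rng.normal(-70, 8, size=(n_points, n_aps))   # stand-in RSS fingerprints (dBm)
coords = rng.uniform(0, 50, size=(n_points, 2))    # surveyed reference locations (m)

features = TSNE(n_components=2, perplexity=30,
                random_state=0).fit_transform(rss)  # denoised low-dim embedding

query_idx = 0                                       # treat point 0 as the live query
d = np.linalg.norm(features - features[query_idx], axis=1)
nearest = np.argsort(d)[1]                          # nearest reference fingerprint
print("estimated position:", coords[nearest], "true:", coords[query_idx])
```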

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 71
2577 Remote Sensing and GIS Based Methodology for Identification of Low Crop Productivity in Gautam Buddha Nagar District

Authors: Shivangi Somvanshi

Abstract:

Poor crop productivity in salt-affected environments in the country is due to insufficient and untimely canal supply to agricultural land and inefficient field water management practices. This could degrade further due to inadequate maintenance of the canal network, ongoing secondary soil salinization and waterlogging, and worsening groundwater quality. Large patches of low productivity in irrigation commands occur due to waterlogging and salt-affected soil, particularly in years of scarce rainfall. Satellite remote sensing has been used for mapping areas of low crop productivity, waterlogging, and salt in irrigation commands. The spatial results obtained for these problems so far are less reliable for further use due to rapid changes in soil quality parameters over the years. The existing spatial databases of the canal network and flow data, groundwater quality, and salt-affected soil were obtained from the central and state line departments/agencies and were integrated with GIS. An integrated methodology based on remote sensing and GIS has therefore been developed in the ArcGIS environment on the basis of canal supply status, groundwater quality, salt-affected soils, and the satellite-derived vegetation index (NDVI), salinity index (NDSI), and waterlogging index (NSWI). This methodology was tested for the identification and delineation of areas of low productivity in the Gautam Buddha Nagar district (Uttar Pradesh). It was found that the area affected by this problem lies mainly in the Dankaur and Jewar blocks of the district. The problem area was verified with ground data and was found to be approximately 78% accurate. The methodology has the potential to be used in other irrigation commands in the country to obtain reliable spatial data on low crop productivity.
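
The vegetation and salinity indices can be computed directly from band reflectances, as in the sketch below. NDVI is standard; the salinity index is given in one common normalized-difference form, which may differ from the exact NDSI formulation used in the study.

```python
# Index computation sketch over small illustrative reflectance arrays.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    # Normalized Difference Vegetation Index: healthy vegetation -> high values
    return (nir - red) / (nir + red + 1e-9)

def ndsi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    # One common Normalized Difference Salinity Index form: bright, sparsely
    # vegetated salt-affected soils push this value upward.
    return (red - nir) / (red + nir + 1e-9)

red = np.array([[0.30, 0.10], [0.25, 0.05]])   # illustrative reflectances
nir = np.array([[0.35, 0.50], [0.30, 0.60]])
print("NDVI:\n", ndvi(nir, red))
print("NDSI:\n", ndsi(red, nir))
```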

Keywords: remote sensing, GIS, salt affected soil, crop productivity, Gautam Buddha Nagar

Procedia PDF Downloads 287
2576 Green Crypto Mining: A Quantitative Analysis of the Profitability of Bitcoin Mining Using Excess Wind Energy

Authors: John Dorrell, Matthew Ambrosia, Abilash

Abstract:

This paper employs econometric analysis to quantify the potential profit wind farms can receive by allocating excess wind energy to power bitcoin mining machines. Cryptocurrency mining consumes a substantial amount of electricity worldwide, and wind energy produces a significant amount of energy that is lost because of the intermittent nature of the resource: supply does not always match consumer demand. By combining the weaknesses of these two technologies, we can improve efficiency and build a sustainable path to mine cryptocurrencies. This paper uses historical wind energy data from the ERCOT network in Texas and cryptocurrency data from 2000-2021 to create 4-year return-on-investment projections. Our research model incorporates the price of bitcoin, the price of the miner, the hash rate of the miner relative to the network hash rate, the block reward, the bitcoin transaction fees awarded to the miners, the mining pool fees, the cost of the electricity, and the percentage of time the miner will be running, to demonstrate that wind farms generate enough excess energy to mine bitcoin profitably. Excess wind energy can be used as a financial battery, which can convert wasted electricity into economic energy. The findings of our research show that wind energy producers can earn a profit while taking little, if any, electricity away from the grid. According to our results, bitcoin mining could give as much as a 1347% and an 805% return on investment with the starting dates of November 1, 2021, and November 1, 2022, respectively, using wind farm curtailment. This paper is helpful to policymakers and investors in determining efficient and sustainable ways to power our economic future. It proposes a practical solution to the problem of crypto mining energy consumption and creates a more sustainable energy future for Bitcoin.
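
A minimal sketch of the revenue side of such a profitability model: expected daily income is the miner's share of network hashrate times the daily block subsidy plus fees, net of pool fees and electricity. All figures are illustrative, not the paper's inputs.

```python
# Daily mining-profit sketch: revenue share of the network minus power cost.
def daily_profit(miner_ths, network_ths, btc_price, block_reward=3.125,
                 fees_per_block=0.05, power_kw=3.0, usd_per_kwh=0.0,
                 blocks_per_day=144, pool_fee=0.02):
    share = miner_ths / network_ths                      # expected share of blocks
    revenue_btc = share * blocks_per_day * (block_reward + fees_per_block)
    revenue_usd = revenue_btc * btc_price * (1 - pool_fee)
    electricity = power_kw * 24 * usd_per_kwh            # ~0 on curtailed wind power
    return revenue_usd - electricity

# One 200 TH/s machine on a 600M TH/s network, free curtailed wind (assumed):
print(f"${daily_profit(200, 6e8, 60000):.2f} per day")
```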

Keywords: bitcoin, mining, economics, energy

Procedia PDF Downloads 34
2575 Undersea Communications Infrastructure: Risks, Opportunities, and Geopolitical Considerations

Authors: Lori W. Gordon, Karen A. Jones

Abstract:

Today’s high-speed data connectivity depends on a vast global network of infrastructure across space, air, land, and sea, with undersea cable infrastructure (UCI) serving as the primary means for intercontinental and ‘long-haul’ communications. The UCI landscape is changing and includes an increasing variety of state actors, such as the growing economies of Brazil, Russia, India, China, and South Africa. Non-state commercial actors, such as hyper-scale content providers including Google, Facebook, Microsoft, and Amazon, are also seeking to control their data and networks through significant investments in submarine cables. Active investments by both state and non-state actors will invariably influence the growth, geopolitics, and security of this sector. Beyond these hyper-scale content providers, there are new commercial satellite communication providers. These new players include traditional geosynchronous (GEO) satellites that offer broad coverage, high-throughput GEO satellites offering high capacity with spot beam technology, and low earth orbit (LEO) ‘mega constellations’ offering global broadband services. Potential new entrants include High Altitude Platforms (HAPS) offering low-latency connectivity and LEO constellations offering high-speed optical mesh networks, i.e., ‘fiber in the sky.’ This paper focuses on understanding the role of submarine cables within the larger context of the global data commons, spanning space, terrestrial, air, and sea networks, including an analysis of national security policy and geopolitical implications. As network operators and commercial and government stakeholders plan for emerging technologies and architectures, hedging risks for future connectivity will ensure that our data backbone remains secure for years to come.

Keywords: communications, global, infrastructure, technology

Procedia PDF Downloads 87
2574 Constructing a Probabilistic Ontology from a DBLP Data

Authors: Emna Hlel, Salma Jamousi, Abdelmajid Ben Hamadou

Abstract:

Every model for knowledge representation used to model real-world applications must be able to cope with the effects of uncertain phenomena. One of the main defects of classical ontologies is their inability to represent and reason with uncertainty. To remedy this defect, we propose a method to construct a probabilistic ontology for integrating uncertain information, applied to an ontology modeling a set of basic DBLP (Digital Bibliography & Library Project) publications, using a probabilistic model.
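
As a toy illustration of the probabilistic layer, the sketch below queries a two-node Bayesian network over ontology statements by enumeration; the concepts and probabilities are invented, and a real DBLP ontology would involve many more nodes.

```python
# Minimal Bayesian-network sketch: prior over a paper's topic, a CPT for its
# venue given the topic, and a posterior query by Bayes' rule.
P_topic = {"ML": 0.3, "DB": 0.7}                      # prior over a paper's topic
P_venue_given_topic = {                               # CPT: P(venue | topic)
    ("VLDB", "ML"): 0.1, ("VLDB", "DB"): 0.8,
    ("NIPS", "ML"): 0.9, ("NIPS", "DB"): 0.2,
}

def posterior_topic(venue: str) -> dict:
    """P(topic | venue) computed by enumeration over the tiny network."""
    joint = {t: P_topic[t] * P_venue_given_topic[(venue, t)] for t in P_topic}
    z = sum(joint.values())
    return {t: p / z for t, p in joint.items()}

print(posterior_topic("VLDB"))   # residual uncertainty about topic, given venue
```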

Keywords: classical ontology, probabilistic ontology, uncertainty, Bayesian network

Procedia PDF Downloads 347
2573 Bioinformatic Prediction of Hub Genes by Analysis of Signaling Pathways, Transcriptional Regulatory Networks and DNA Methylation Pattern in Colon Cancer

Authors: Ankan Roy, Niharika, Samir Kumar Patra

Abstract:

An anomalous nexus of complex topological assemblies and spatiotemporal epigenetic choreography at the chromosomal territory may form the most sophisticated regulatory layer of gene expression in cancer. Colon cancer is one of the leading malignant neoplasms of the lower gastrointestinal tract worldwide. There is still a paucity of information about the complex molecular mechanisms of colonic cancerogenesis. Bioinformatics prediction and analysis help to identify essential genes and significant pathways for monitoring and conquering this deadly disease. The present study investigates and explores potential hub genes as biomarkers and effective therapeutic targets for colon cancer treatment. Colon cancer patient gene expression profile datasets, such as GSE44076, GSE20916, and GSE37364, were downloaded from the Gene Expression Omnibus (GEO) database and thoroughly screened using the GEO2R tool and FunRich software to find common differentially expressed genes (DEGs). Other approaches, including Gene Ontology (GO) and KEGG pathway analysis, Protein-Protein Interaction (PPI) network construction and hub gene investigation, Overall Survival (OS) analysis, gene correlation analysis, methylation pattern analysis, and hub gene-transcription factor regulatory network construction, were performed and validated using various bioinformatics tools. Initially, we identified 166 DEGs, including 68 up-regulated and 98 down-regulated genes. Up-regulated genes are mainly associated with cytokine-cytokine receptor interaction, the IL17 signaling pathway, ECM-receptor interaction, focal adhesion, and the PI3K-Akt pathway. Down-regulated genes are enriched in metabolic pathways, retinol metabolism, steroid hormone biosynthesis, and bile secretion. From the protein-protein interaction network, thirty hub genes with high connectivity were selected using the MCODE and cytoHubba plugins. Survival analysis, expression validation, correlation analysis, and methylation pattern analysis were further verified using TCGA data. Finally, we predicted COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as potential master regulators in colonic cancerogenesis. Moreover, our experimental data highlight that disruption of the lipid raft and RAS/MAPK signaling cascade affects this gene hub at the mRNA level. We identified COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as determinant hub genes in colon cancer progression. They can be considered biomarkers for diagnosis and promising therapeutic targets in colon cancer treatment. Additionally, our experimental data suggest that signaling pathways act as a connecting link between the membrane hub and the gene hub.
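
A minimal sketch of the DEG screening step applied to a GEO2R-style results table; the thresholds and the table contents are illustrative assumptions.

```python
# Screen genes by fold change and adjusted p-value, then split up/down.
import pandas as pd

results = pd.DataFrame({
    "gene": ["COL1A1", "SPP1", "AQP8", "CA2", "ACTB"],   # illustrative rows
    "logFC": [2.4, 3.1, -2.8, -1.9, 0.1],
    "adj.P.Val": [1e-6, 3e-8, 2e-7, 4e-4, 0.7],
})

sig = results[(results["adj.P.Val"] < 0.05) & (results["logFC"].abs() >= 1.0)]
up = sig[sig["logFC"] > 0]["gene"].tolist()      # candidate up-regulated DEGs
down = sig[sig["logFC"] < 0]["gene"].tolist()    # candidate down-regulated DEGs
print("up:", up, "| down:", down)
```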

Keywords: hub genes, colon cancer, DNA methylation, epigenetic engineering, bioinformatic predictions

Procedia PDF Downloads 128
2572 An Inventory Management Model to Manage the Stock Level for Irregular Demand Items

Authors: Riccardo Patriarca, Giulio Di Gravio, Francesco Costantino, Massimo Tronci

Abstract:

An accurate inventory management policy plays a crucial role in several high-availability sectors. In these sectors, due to the high cost of spares and backorders, an (S-1, S) replenishment policy is necessary for high-availability items. The policy enables the shipment of a substitute item any time the inventory size decreases by one. This policy can be modelled following the Multi-Echelon Technique for Recoverable Item Control (METRIC). METRIC is a system-based technique that allows defining the optimum stock level in a multi-echelon network, adopting measures in line with the decision-maker’s perspective. METRIC defines an availability-cost function with inventory costs and required service levels, using as inputs data about the demand trend, the supplying and maintenance characteristics of the network, and the budget/availability constraints. The traditional METRIC relies on the hypothesis that a Poisson distribution represents the demand distribution well in the case of items with a low failure rate. In this research, however, we explore the effects of using a Poisson distribution to model the demand for low-failure-rate items characterized by an irregular demand trend. This characteristic of demand is not included in the traditional METRIC formulation, leading to the need to revise it. Using the CV (Coefficient of Variation) and ADI (Average inter-Demand Interval) classification, we define the inherent flaws of the Poisson-based METRIC for irregular demand items and define an innovative ad hoc distribution which can better fit irregular demands. This distribution allows defining proper stock levels to reduce stocking and backorder costs due to the high irregularity in the demand trend. A case study in the aviation domain clarifies the benefits of this innovative METRIC approach.
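
The Poisson machinery at the heart of the traditional METRIC is easy to state for an (S-1, S) item: with pipeline demand X ~ Poisson(λT), expected backorders and fill rate follow directly, as in this sketch (the demand rate and resupply time are illustrative).

```python
# (S-1, S) policy under Poisson pipeline demand: expected backorders EBO(S)
# and item fill rate for each candidate stock level S.
import math

def poisson_pmf(k: int, mu: float) -> float:
    return math.exp(-mu) * mu**k / math.factorial(k)

def expected_backorders(s: int, mu: float, kmax: int = 200) -> float:
    # EBO(S) = sum over k > S of (k - S) * P(X = k)
    return sum((k - s) * poisson_pmf(k, mu) for k in range(s + 1, kmax))

def fill_rate(s: int, mu: float) -> float:
    # A demand is met from stock when units in resupply X <= S - 1
    return sum(poisson_pmf(k, mu) for k in range(s))

mu = 1.2 * 2.5    # demand rate 1.2/period * resupply time 2.5 periods (assumed)
for s in range(1, 8):
    print(f"S={s}: EBO={expected_backorders(s, mu):.3f}, "
          f"fill rate={fill_rate(s, mu):.3f}")
```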

Keywords: METRIC, inventory management, irregular demand, spare parts

Procedia PDF Downloads 347
2571 Prototype of an Interactive Toy from Lego Robotics Kits for Children with Autism

Authors: Ricardo A. Martins, Matheus S. da Silva, Gabriel H. F. Iarossi, Helen C. M. Senefonte, Cinthyan R. S. C. de Barbosa

Abstract:

This paper develops a concept of human/robot interaction, aimed specifically at autistic children, who have greater difficulty with interaction; it offers a solution that, though simple, is efficient and has been little studied for this public. The concept is based on code deployed through the Lego NXT kit, built for interpretation by the robot, which can thereby create this interaction in a constructive way for children with autism.

Keywords: Lego NXT, interaction, BricX, autism, ANN (Artificial Neural Network), MLP backpropagation, hidden layers

Procedia PDF Downloads 569
2570 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model

Authors: Danjuma Bawa

Abstract:

This paper aimed at exploring the capabilities of the location-allocation model in complementing the strides of existing physical planning models in the location and distribution of facilities for regional consumption. The paper was designed to provide a blueprint to the Nigerian government and other donor agencies, especially the Fertilizer Distribution Initiative (FDI) of the federal government, for the revitalization of the terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcased how the Location-Allocation Model (L-AM) alongside Central Place Theory (CPT) was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, exploit their physical and economic interrelationships, and explore their hierarchical and opportunistic influences. The study was purely spatial qualitative research which largely used secondary data, such as the spatial location and distribution of settlements, population figures of settlements, the network of roads linking them, and other landform features. These were sourced from government ministries and open source consortia. GIS was used as a tool for processing and analyzing such spatial features within the dictum of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as the yardstick for selecting suitable settlements that could stand as service centers to other hinterlands; this was accomplished using the query syntax in ArcMap™. ArcGIS™ Network Analyst was used to conduct the location-allocation analysis for apportioning groups of settlements around such service centers within a given threshold distance. Most of the techniques and models ever used by utility planners have been centered on straight-line distance to settlements using Euclidean distances. Such models neglect impedance cutoffs and the routing capabilities of networks; CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa state. Four (4) existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers within a 2 km impedance cutoff. This study complements the giant strides of the federal government of Nigeria by providing a blueprint for ensuring the proper distribution of these public goods in the spirit of bringing succor to the terrorism-ravaged populace. This will at the same time help in boosting agricultural activities, thereby lowering food shortages and raising per capita income, as espoused by the government.
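
A simplified sketch of the allocation step: each settlement is assigned to its nearest service centre within the 2 km impedance cutoff. Straight Euclidean distance on projected coordinates stands in for the road-network distance that ArcGIS™ Network Analyst computes, and the coordinates are synthetic.

```python
# Nearest-centre allocation with an impedance cutoff.
import numpy as np

rng = np.random.default_rng(3)
settlements = rng.uniform(0, 10_000, size=(300, 2))   # coords in metres (synthetic)
centres = rng.uniform(0, 10_000, size=(24, 2))        # 4 existing + 20 proposed depots
CUTOFF = 2_000.0                                      # 2 km impedance cutoff

d = np.linalg.norm(settlements[:, None, :] - centres[None, :, :], axis=2)
nearest = d.argmin(axis=1)                            # index of the closest depot
within = d[np.arange(len(settlements)), nearest] <= CUTOFF
allocation = np.where(within, nearest, -1)            # -1 = unserved settlement

print("served:", int(within.sum()), "of", len(settlements))
```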

Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics

Procedia PDF Downloads 147
2569 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, numerical results proved that, in comparison to traditional methods, the proposed SRCLoc method can significantly improve positioning performance and reduce radio map construction costs.

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 75
2568 Intrusion Detection in SCADA Systems

Authors: Leandros A. Maglaras, Jianmin Jiang

Abstract:

The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The funded European Framework-7 (FP7) research project CockpitCI introduces intelligent intrusion detection, analysis, and protection techniques for Critical Infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected and vulnerable Information and Communication Technology (ICT), whilst the control equipment, legacy software/hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis, and reaction tools that provide intelligence to field equipment. This will allow the field equipment to perform local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. The One-Class Support Vector Machine (OCSVM) is an intrusion detection mechanism that does not need any labeled data for training, nor any information about the kind of anomaly it expects in the detection process. This feature makes it ideal for processing SCADA environment data and automating SCADA performance monitoring. The OCSVM module developed is trained offline on network traces and detects anomalies in the system in real time. The module is part of an IDS (intrusion detection system) developed under the CockpitCI project and communicates with the other parts of the system through the exchange of IDMEF messages that carry information about the source of the incident, the time, and a classification of the alarm.
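
A minimal sketch of the OCSVM detector: it is trained offline on feature vectors from normal traffic only and then flags deviations. The feature construction here is an illustrative assumption, not the module's actual feature set.

```python
# One-class SVM anomaly detection sketch for SCADA-style traffic features.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)
# Stand-in features per time window: [packet rate, mean size, distinct peers]
normal = rng.normal([100, 256, 4], [10, 20, 1], size=(1000, 3))
ocsvm = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(normal)  # offline training

live = np.vstack([rng.normal([100, 256, 4], [10, 20, 1], size=(5, 3)),
                  [[900, 64, 40]]])          # last row: flooding-like traffic
print(ocsvm.predict(live))                   # +1 = normal, -1 = raise an IDMEF alarm
```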

Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection

Procedia PDF Downloads 552
2567 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in their serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary actions to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'PAVER' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in predicting and solving non-linear relationships and dealing with an uncertain large amount of data. Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which was found to be an appropriate tool for predicting the different pavement performance indices versus different factors as well. Subsequently, the objective of the presented study is to develop and train an ANN model that predicts PCI values. The model’s input consists of the percentage areas of 11 different damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, at three severity levels (low, medium, high) each. The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by The National Transport Infrastructure Company. The predicted results yielded satisfactory compliance with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
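
As a sketch of such a network, the snippet below maps the 33 inputs (11 distress types × 3 severity levels, as percentage areas) to a PCI value in [0, 100]. The layer sizes are assumptions; the abstract does not specify the architecture.

```python
# PCI regression sketch: 33 distress-density inputs -> PCI in [0, 100].
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(33,))           # 11 distress types x 3 severities
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dense(64, activation="relu")(x)
pci01 = layers.Dense(1, activation="sigmoid")(x)   # squash to (0, 1)...
pci = layers.Rescaling(100.0)(pci01)               # ...then scale to 0-100

model = models.Model(inputs, pci)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(x_train, y_train, validation_split=0.2, epochs=50)
```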

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 137
2566 Advantages of Neural Network Based Air Data Estimation for Unmanned Aerial Vehicles

Authors: Angelo Lerro, Manuela Battipede, Piero Gili, Alberto Brandl

Abstract:

Redundancy requirements for UAVs (Unmanned Aerial Vehicles) are hard to meet due to the generally restricted amount of available space and allowable weight for the aircraft systems, limiting their exploitation. Essential equipment such as the Air Data, Attitude and Heading Reference System (ADAHRS) requires several external probes to measure significant data such as the Angle of Attack or the Sideslip Angle. Previous research focused on the analysis of a patented technology named Smart-ADAHRS (Smart Air Data, Attitude and Heading Reference System) as an alternative method to obtain reliable and accurate estimates of the aerodynamic angles. This solution is based on an innovative sensor fusion algorithm implementing soft computing techniques, and it allows a simplified inertial and air data system to be obtained, reducing external devices. In fact, only one external source of dynamic and static pressure is needed. This paper focuses on the benefits which would be gained by the implementation of this system in UAV applications. A simplification of the entire ADAHRS architecture will reduce the overall cost together with improved safety performance. Smart-ADAHRS has currently reached Technology Readiness Level (TRL) 6. Real flight tests took place on an ultralight aircraft equipped with suitable Flight Test Instrumentation (FTI). The output of the algorithm using the flight test measurements demonstrates the capability of this fusion algorithm to embed multiple physical and virtual sensors in a single device. Any source of dynamic and static pressure can be integrated with this system, gaining a significant improvement in terms of versatility.

Keywords: aerodynamic angles, air data system, flight test, neural network, unmanned aerial vehicle, virtual sensor

Procedia PDF Downloads 221
2565 Flexible Communication Platform for Crisis Management

Authors: Jiří Barta, Tomáš Ludík, Jiří Urbánek

Abstract:

The topics of disaster and emergency management are highly debated among experts. Fast communication helps in dealing with emergencies; the problem lies in network connection and data exchange. This paper suggests a solution that explores the possibilities and perspectives of a new flexible communication platform for the protection of communication systems for crisis management. This platform is used both for everyday communication and for communication in crisis situations.

Keywords: crisis management, information systems, interoperability, crisis communication, security environment, communication platform

Procedia PDF Downloads 475
2564 Assessing the Impact of High Fidelity Human Patient Simulation on Teamwork among Nursing, Medicine and Pharmacy Undergraduate Students

Authors: S. MacDonald, A. Manuel, R. Law, N. Bandruak, A. Dubrowski, V. Curran, J. Smith-Young, K. Simmons, A. Warren

Abstract:

High fidelity human patient simulation has been used for many years by health sciences education programs to foster critical thinking, engage learners, improve confidence, improve communication, and enhance psychomotor skills. Unfortunately, there is a paucity of research on the use of high fidelity human patient simulation to foster teamwork among nursing, medicine and pharmacy undergraduate students. This study compared the impact of high fidelity and low fidelity simulation education on teamwork among nursing, medicine and pharmacy students. For the purpose of this study, two innovative teaching scenarios were developed based on the care of an adult patient experiencing acute anaphylaxis: one high fidelity using a human patient simulator and one low fidelity using case based discussions. A within-subjects, pretest-posttest, repeated measures design was used with two treatment levels and random assignment of individual subjects to teams of two or more professions. A convenience sample of twenty-four (n=24) undergraduate students participated, including: nursing (n=11), medicine (n=9), and pharmacy (n=4). The Interprofessional Teamwork Questionnaire was used to assess for changes in students’ perception of their functionality within the team, importance of interprofessional collaboration, comprehension of roles, and confidence in communication and collaboration. Student satisfaction was also assessed. Students reported significant improvements in their understanding of the importance of interprofessional teamwork and of the roles of nursing and medicine on the team after participation in both the high fidelity and the low fidelity simulation. However, only participants in the high fidelity simulation reported a significant improvement in their ability to function effectively as a member of the team. All students reported that both simulations were a meaningful learning experience and all students would recommend both experiences to other students. These findings suggest there is merit in both high fidelity and low fidelity simulation as teaching and learning approaches to foster teamwork among undergraduate nursing, medicine and pharmacy students. However, participation in high fidelity simulation may provide a more realistic opportunity to practice and function as an effective member of the interprofessional health care team.

Keywords: acute anaphylaxis, high fidelity human patient simulation, low fidelity simulation, interprofessional education

Procedia PDF Downloads 231
2563 Performance Assessment of Carrier Aggregation-Based Indoor Mobile Networks

Authors: Viktor R. Stoynov, Zlatka V. Valkova-Jarvis

Abstract:

The intelligent management and optimisation of radio resource technologies will lead to a considerable improvement in overall performance in Next Generation Networks (NGNs). Carrier Aggregation (CA) technology, also known as Spectrum Aggregation, enables more efficient use of the available spectrum by combining multiple Component Carriers (CCs) in a virtual wideband channel. LTE-A (Long Term Evolution–Advanced) CA technology can combine multiple adjacent or separate CCs in the same band or in different bands. In this way, increased data rates and dynamic load balancing can be achieved, resulting in more reliable and efficient operation of mobile networks and enabling high-bandwidth mobile services. In this paper, several distinct CA deployment strategies for the utilisation of spectrum bands are compared in indoor-outdoor scenarios, simulated via the recently developed Realistic Indoor Environment Generator (RIEG). We analyse the performance of the User Equipment (UE) by integrating the average throughput, the level of fairness of radio resource allocation, and other parameters into one summative assessment termed a Comparative Factor (CF). In addition, a comparison of non-CA and CA indoor mobile networks is carried out under different load conditions: varying numbers and positions of UEs. The experimental results demonstrate that CA technology can improve network performance, especially in indoor scenarios. Additionally, we show that an increase in carrier frequency does not necessarily lead to improved CF values, due to high wall-penetration losses. The performance of users under bad channel conditions, often located at the periphery of the cells, can be improved by intelligent CA location. Furthermore, a combination of such a deployment and effective radio resource allocation management with respect to user fairness plays a crucial role in improving the performance of LTE-A networks.
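
A sketch of the kind of summative metric described: Jain's fairness index over per-UE throughputs combined with normalized mean throughput into a single score. The equal weighting is an assumption, and the paper's exact CF definition may differ.

```python
# Jain's fairness index plus a simple Comparative-Factor-style combination.
import numpy as np

def jain_fairness(throughputs: np.ndarray) -> float:
    # 1.0 = perfectly fair allocation; 1/n = one UE gets everything
    return throughputs.sum() ** 2 / (len(throughputs) * (throughputs ** 2).sum())

def comparative_factor(throughputs: np.ndarray, w: float = 0.5) -> float:
    norm_tp = throughputs.mean() / throughputs.max()   # normalized average throughput
    return w * norm_tp + (1 - w) * jain_fairness(throughputs)

no_ca = np.array([12.0, 11.0, 3.0, 2.5])    # Mbps per UE; cell-edge users starved
with_ca = np.array([18.0, 16.0, 9.0, 8.0])  # CA lifts edge users via a second CC
print(f"CF without CA: {comparative_factor(no_ca):.3f}")
print(f"CF with CA:    {comparative_factor(with_ca):.3f}")
```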

Keywords: comparative factor, carrier aggregation, indoor mobile network, resource allocation

Procedia PDF Downloads 178