Search results for: feed-forward neural network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5150

2270 Application of Combined Cluster and Discriminant Analysis to Make the Operation of Monitoring Networks More Economical

Authors: Norbert Magyar, Jozsef Kovacs, Peter Tanos, Balazs Trasy, Tamas Garamhegyi, Istvan Gabor Hatvani

Abstract:

Water is one of the most important common resources, and as a result of urbanization, agriculture, and industry it is becoming more and more exposed to potential pollutants. Preventing the deterioration of water quality is a crucial task for environmental scientists. To achieve this aim, the operation of monitoring networks is necessary. In general, these networks have to meet many important requirements, such as representativeness and cost efficiency. However, existing monitoring networks often include unnecessary sampling sites. By eliminating these sites, the monitoring network can be optimized and operated more economically. The aim of this study is to illustrate the applicability of Combined Cluster and Discriminant Analysis (CCDA) to the field of water quality monitoring and to optimize the monitoring networks of a river (the Danube), a wetland-lake system (Kis-Balaton & Lake Balaton), and two surface-subsurface water systems, on the watershed of Lake Neusiedl/Lake Fertő and in the Szigetköz area, over a period of approximately two decades. CCDA combines two multivariate data analysis methods: hierarchical cluster analysis and linear discriminant analysis. Its goal is to determine homogeneous groups of observations (in our case, sampling sites) by comparing the goodness of preconceived classifications obtained from hierarchical cluster analysis with that of random classifications. The main idea behind CCDA is that if the ratio of correctly classified cases for a grouping is higher than at least 95% of the ratios for the random classifications, then, at the level of significance (α = 0.05), the given sampling sites do not form a homogeneous group. Because sampling on Lake Neusiedl/Lake Fertő was conducted at the same time at all sampling sites, it was possible to visualize the differences between sampling sites belonging to the same or different groups on scatterplots.
Based on the results, the monitoring network of the Danube yields redundant information over certain sections: of 12 sampling sites, 3 could be eliminated without loss of information. In the case of the wetland (Kis-Balaton), one pair of sampling sites out of 12, and in the case of Lake Balaton, 5 out of 10, could be discarded. For the groundwater system of the catchment area of Lake Neusiedl/Lake Fertő, all 50 monitoring wells are necessary; there is no redundant information in the system. The number of sampling sites on Lake Neusiedl/Lake Fertő itself can be reduced to approximately half the original number. Furthermore, neighbouring sampling sites were compared pairwise using CCDA, and the results were plotted on diagrams or isoline maps showing the locations of the greatest differences. These results can help researchers decide where to place new sampling sites. CCDA proved to be a useful tool for optimizing the monitoring networks of different types of water bodies. Based on the results obtained, the monitoring networks can be operated more economically.
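The homogeneity decision rule described above can be sketched in a few lines. This is an illustrative stand-in, not the authors' implementation: a nearest-centroid classifier replaces linear discriminant analysis, and the sampling-site data are invented.

```python
# Hypothetical sketch of a CCDA-style test: a grouping of sampling sites is
# judged NOT homogeneous if its correct-classification ratio beats at least
# 95% of the ratios obtained for random regroupings of the same sites.
import random
from statistics import mean

def correct_ratio(samples, labels):
    """Fraction of samples assigned to their own group's centroid."""
    groups = set(labels)
    centroids = {g: [mean(s[d] for s, l in zip(samples, labels) if l == g)
                     for d in range(len(samples[0]))] for g in groups}
    def nearest(s):
        return min(centroids,
                   key=lambda g: sum((a - b) ** 2 for a, b in zip(s, centroids[g])))
    return sum(nearest(s) == l for s, l in zip(samples, labels)) / len(samples)

def ccda_like_test(samples, labels, n_random=200, alpha=0.05, seed=0):
    """Return True if the grouping is NOT homogeneous at level alpha."""
    rng = random.Random(seed)
    observed = correct_ratio(samples, labels)
    random_ratios = []
    for _ in range(n_random):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        random_ratios.append(correct_ratio(samples, shuffled))
    # does the grouping beat at least (1 - alpha) of the random classifications?
    beaten = sum(observed > r for r in random_ratios) / n_random
    return beaten >= 1 - alpha

# Two clearly separated clusters of "sampling sites" in a 2-D water-quality space
sites = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.05, 0.15), (0.12, 0.3),
         (5.0, 5.1), (5.2, 4.9), (4.8, 5.0), (5.1, 5.2), (4.9, 4.8)]
labels = ["A"] * 5 + ["B"] * 5
print(ccda_like_test(sites, labels))
```

Well-separated groups should be flagged as not homogeneous, which is the situation where the sites carry genuinely distinct information.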

Keywords: combined cluster and discriminant analysis, cost efficiency, monitoring network optimization, water quality

2269 Design and Simulation of All Optical Fiber to the Home Network

Authors: Rahul Malhotra

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the technologies that has emerged in recent years is the Passive Optical Network (PON). This paper demonstrates the simultaneous delivery of triple-play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases because the bit error rate rises.
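The stated trade-off can be illustrated with the standard Gaussian Q-factor relation BER = 0.5·erfc(Q/√2). The link-budget scaling below is an assumption for illustration only (a 1:N splitter dividing power evenly, with Q proportional to the square root of received power), not the paper's simulation setup.

```python
# Illustrative sketch: in a PON, a 1:N splitter divides optical power among
# users, so received power falls as N grows and the bit error rate rises.
import math

def ber_from_q(q):
    """Standard Gaussian approximation of bit error rate from Q-factor."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def q_for_users(n_users, q_single=7.0):
    # Assumption: Q scales as the square root of received power, and the
    # power per user scales as 1/N.
    return q_single / math.sqrt(n_users)

for n in (2, 4, 8, 16, 32):
    print(n, f"{ber_from_q(q_for_users(n)):.2e}")
```

The printed table rises monotonically with N, mirroring the abstract's observation that accommodating more users degrades the error rate.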

Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT

2268 Automated Driving Deep Neural Networks Model Accuracy and Performance Assessment in a Simulated Environment

Authors: David Tena-Gago, Jose M. Alcaraz Calero, Qi Wang

Abstract:

The evolution and integration of automated vehicles have become more and more tangible in recent years. State-of-the-art technological advances in camera-based Artificial Intelligence (AI) and computer vision greatly favor the performance and reliability of Advanced Driver Assistance Systems (ADAS), leading to greater knowledge of vehicular operation and closer resemblance to human behavior. However, this technology alone still seems insufficient to control vehicular operation completely. To reveal the degree of accuracy of current camera-based automated-driving AI modules, this paper studies the structure and behavior of one of the main solutions in a controlled testing environment. The results clearly show a lack of reliability when the AI model is used exclusively in the perception stage, which entails adding complementary sensors to improve safety and performance.

Keywords: accuracy assessment, AI-driven mobility, artificial intelligence, automated vehicles

2267 U Slot Loaded Wearable Textile Antenna

Authors: Varsha Kheradiya, Ganga Prasad Pandey

Abstract:

The use of wearable antennas is rising as wireless devices become smaller. A wearable antenna is part of clothing used in communication applications, including energy harvesting, medical applications, navigation, and tracking. In recent years, antennas embroidered on clothes, fabric-based conducting antennas, polymer-embedded antennas, and inkjet-printed antennas have all become attractive approaches. The analysis required for wearable antennas, such as the interaction of the antenna with the human body, is also discussed. The primary requirements for the antenna are small size, a low profile that minimizes radiation absorption by the human body, high efficiency, structural integrity to survive worst-case conditions, and good gain. Therefore, design research in energy harvesting, biomedicine, and military applications increasingly favors flexible wearable antennas. Textile materials are effectively used for designing and developing wearable antennas for body area networks, and the wireless body area network is primarily concerned with creating effective antenna systems. Such antennas should be small, lightweight, and adaptable when integrated into clothes; integration into clothing provides a convenient alternative to antennas fabricated on rigid substrates. This paper presents a study of a U-slot-loaded wearable textile antenna. The U-slot patch antenna is designed for wideband operation from 1 GHz to 6 GHz using the textile material jeans as the substrate and pure copper polyester taffeta fabric as the conducting material. The design exhibits dual-band results for WLAN at the 2.4 GHz and 3.6 GHz frequencies. Horizontal and vertical shifting of the U-slot position is also studied; shifting the U slot along the positive X-axis produces a third band at 5.8 GHz.
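As a rough sanity check on the quoted WLAN band, the textbook Hammerstad approximation gives the dominant-mode resonance of a rectangular microstrip patch. The patch dimensions and the jeans permittivity below are assumed values for illustration, not the paper's actual design.

```python
# Back-of-the-envelope patch resonance (textbook approximation):
# f = c / (2 * L_eff * sqrt(eps_eff)), with fringing-field length extension.
import math

C = 3e8  # speed of light, m/s

def patch_resonant_freq(L, W, h, eps_r):
    """Approximate TM10 resonance of a rectangular patch (Hammerstad model)."""
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    # fringing-field length extension on each edge
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
         ((eps_eff - 0.258) * (W / h + 0.8))
    return C / (2 * (L + 2 * dL) * math.sqrt(eps_eff))

# Assumed 46 mm x 56 mm patch on a 1 mm jeans substrate (eps_r ~ 1.7 is a
# value commonly quoted for denim, not taken from the paper)
f = patch_resonant_freq(L=0.046, W=0.056, h=0.001, eps_r=1.7)
print(f"{f / 1e9:.2f} GHz")
```

With these assumed dimensions the estimate lands near the 2.4 GHz WLAN band, illustrating why low-permittivity textile substrates need comparatively large patches.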

Keywords: microstrip patch antenna, textile material, U slot wearable antenna, wireless body area network

2266 Continuous Land Cover Change Detection in Subtropical Thicket Ecosystems

Authors: Craig Mahlasi

Abstract:

The Subtropical Thicket Biome has been in peril of transformation: estimates indicate that as much as 63% of it is severely degraded, with agricultural expansion the main driver. While several studies have sought to document and map the long-term transformations, there is a lack of information on disturbance events that would allow timely intervention by authorities. Furthermore, tools for continuous land cover change detection are often developed for forests and thus tend to perform poorly in thicket ecosystems. This study investigates the utility of Earth Observation data for continuous land cover change detection in Subtropical Thicket ecosystems. Temporal neural networks are implemented on a time series of Sentinel-2 observations. The model obtained an accuracy of 0.93, a recall of 0.93, and a precision of 0.91 in detecting thicket disturbances. The study demonstrates the potential of continuous land cover change detection in Subtropical Thicket ecosystems.
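For readers unfamiliar with the reported metrics, the sketch below shows how accuracy, recall, and precision derive from a confusion matrix. The counts are hypothetical values chosen so the derived scores land near those reported; they are not the study's data.

```python
# Relating classification scores to confusion-matrix counts
# (tp = true positives, fp = false positives, fn = false negatives,
#  tn = true negatives).
def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)          # of flagged disturbances, how many are real
    recall = tp / (tp + fn)             # of real disturbances, how many are caught
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, recall, precision

# Hypothetical counts, invented for illustration
acc, rec, prec = metrics(tp=93, fp=9, fn=7, tn=120)
print(round(acc, 2), round(rec, 2), round(prec, 2))  # → 0.93 0.93 0.91
```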

Keywords: remote sensing, land cover change detection, subtropical thickets, near-real time

2265 Book Exchange System with a Hybrid Recommendation Engine

Authors: Nilki Upathissa, Torin Wirasinghe

Abstract:

This solution addresses the challenges faced by traditional bookstores and the limitations of digital media, striking a balance between the tactile experience of printed books and the convenience of modern technology. The book exchange system offers a sustainable alternative, empowering users to access a diverse range of books while promoting community engagement. The user-friendly interfaces incorporated into the book exchange system ensure a seamless and enjoyable experience for users. Intuitive features for book management, search, and messaging facilitate effortless exchanges and interactions between users. By streamlining the process, the system encourages readers to explore new books aligned with their interests, enhancing the overall reading experience. Central to the system's success is the hybrid recommendation engine, which leverages advanced technologies such as Long Short-Term Memory (LSTM) models. By analyzing user input, the engine accurately predicts genre preferences, enabling personalized book recommendations. The hybrid approach integrates multiple technologies, including user interfaces, machine learning models, and recommendation algorithms, to ensure the accuracy and diversity of the recommendations. The evaluation of the book exchange system with the hybrid recommendation engine demonstrated exceptional performance across key metrics. The high accuracy score of 0.97 highlights the system's ability to provide relevant recommendations, enhancing users' chances of discovering books that resonate with their interests. The commendable precision, recall, and F1-score values further validate the system's efficacy in offering appropriate book suggestions. Additionally, the curve classifications substantiate the system's effectiveness in distinguishing positive and negative recommendations.
This metric provides confidence in the system's ability to navigate the vast landscape of book choices and deliver recommendations that align with users' preferences. Furthermore, the implementation of this book exchange system with a hybrid recommendation engine has the potential to revolutionize the way readers interact with printed books. By facilitating book exchanges and providing personalized recommendations, the system encourages a sense of community and exploration within the reading community. Moreover, the emphasis on sustainability aligns with the growing global consciousness towards eco-friendly practices. With its robust technical approach and promising evaluation results, this solution paves the way for a more inclusive, accessible, and enjoyable reading experience for book lovers worldwide. In conclusion, the developed book exchange system with a hybrid recommendation engine represents a progressive solution to the challenges faced by traditional bookstores and the limitations of digital media. By promoting sustainability, widening access to printed books, and fostering engagement with reading, this system addresses the evolving needs of book enthusiasts. The integration of user-friendly interfaces, advanced machine learning models, and recommendation algorithms ensure accurate and diverse book recommendations, enriching the reading experience for users.

Keywords: recommendation systems, hybrid recommendation systems, machine learning, data science, long short-term memory, recurrent neural network

2264 Hormone Replacement Therapy (HRT) and Its Impact on the All-Cause Mortality of UK Women: A Matched Cohort Study 1984-2017

Authors: Nurunnahar Akter, Elena Kulinskaya, Nicholas Steel, Ilyas Bakbergenuly

Abstract:

Although Hormone Replacement Therapy (HRT) is an effective treatment for ameliorating menopausal symptoms, it has mixed effects on different health outcomes, increasing, for instance, the risk of breast cancer. Because of this, many symptomatic women are left untreated. Untreated menopausal symptoms may result in other health issues, which eventually place an extra burden and cost on the health care system. All-cause mortality analysis may capture the net benefits and risks of HRT; however, it has received far less attention in HRT studies. This study investigated the impact of HRT on all-cause mortality using electronically recorded primary care data from The Health Improvement Network (THIN), which broadly represents the female population of the United Kingdom (UK). The study entry date was the record of the first HRT prescription from 1984 onwards, and patients were followed up until death, transfer to another general practice (GP), or the study end date of January 2017. 112,354 HRT users (cases) were matched with 245,320 non-users by age at HRT initiation and general practice. The hazards of all-cause mortality associated with HRT were estimated by a parametric Weibull-Cox model adjusted for a wide range of important medical, lifestyle, and socio-demographic factors, and multilevel multiple imputation techniques were used to deal with missing data. This study found that during 32 years of follow-up, combined HRT reduced the hazard ratio (HR) of all-cause mortality by 9% (HR: 0.91; 95% confidence interval (CI), 0.88-0.94) in women aged 46 to 65 at first treatment compared to non-users of the same age. Age-specific mortality analyses found that combined HRT decreased mortality by 13% (HR: 0.87; 95% CI, 0.82-0.92), 12% (HR: 0.88; 95% CI, 0.82-0.93), and 8% (HR: 0.92; 95% CI, 0.85-0.98) in the 51-55, 56-60, and 61-65 age groups at first treatment, respectively. There was no association between estrogen-only HRT and women's all-cause mortality. The findings from this study may help inform the choices of women at menopause and further educate clinicians and resource planners.
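How a hazard ratio below 1 translates into survival can be illustrated under a proportional-hazards assumption with an invented Weibull baseline. The shape and scale parameters below are made up for illustration; they are not from the fitted model.

```python
# Under proportional hazards, a hazard ratio HR scales the baseline hazard,
# so the treated survival curve is S_treated(t) = S0(t) ** HR.
import math

def weibull_survival(t, shape=1.5, scale=40.0):
    """Baseline Weibull survival S0(t) = exp(-(t/scale)**shape) (invented parameters)."""
    return math.exp(-((t / scale) ** shape))

def survival_under_hr(t, hr, **kw):
    return weibull_survival(t, **kw) ** hr

t = 32  # years of follow-up, as in the study
s0 = weibull_survival(t)
s_hrt = survival_under_hr(t, hr=0.91)  # reported HR for combined HRT
print(f"baseline {s0:.3f}, combined-HRT {s_hrt:.3f}")
```

Because HR < 1, the treated survival curve sits above the baseline at every time point, which is the sense in which the reported 9% hazard reduction is protective.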

Keywords: hormone replacement therapy, multiple imputations, primary care data, the health improvement network (THIN)

2263 Impact of Agricultural Infrastructure on Diffusion of Technology of the Sample Farmers in North 24 Parganas District, West Bengal

Authors: Saikat Majumdar, D. C. Kalita

Abstract:

The agriculture sector plays an important role in the rural economy of India. It is the backbone of the Indian economy and the dominant sector in terms of employment and livelihood. Agriculture still contributes significantly to export earnings and is an important source of raw materials, as well as of demand for many industrial products, particularly fertilizers, pesticides, agricultural implements, and a variety of consumer goods. The performance of the agricultural sector influences the growth of the Indian economy. According to the 2011 Agricultural Census of India, an estimated 61.5 percent of the rural population is dependent on agriculture. Proper agricultural infrastructure has the potential to transform existing traditional agriculture into a modern, commercial, and dynamic farming system in India through the diffusion of technology. The rate of adoption of modern technology reflects the progress of development in the agricultural sector, and the adoption of any improved agricultural technology also depends on the development of road infrastructure. The present study consisted of 300 sample farmers, of which 150 were drawn from a developed area and the remaining 150 from an underdeveloped area, selected using a multistage random sampling procedure. In the first stage, North 24 Parganas district was selected purposively. From the district, one developed and one underdeveloped block were then selected randomly. In the third stage, 10 villages were selected randomly from each block. Finally, 15 sample farmers were selected randomly from each village. The extent of adoption of technology in the two areas was measured through various parameters: the percentage area under High Yielding Variety (HYV) cereals, the percentage area under HYV pulses, the area under hybrid vegetables, the irrigated area, the mechanically operated area, and the amount spent on fertilizer and pesticides, in both the developed and underdeveloped areas of North 24 Parganas district, West Bengal. The percentage area under HYV cereals in the developed and underdeveloped areas was 34.86 and 22.59 percent respectively, and 42.07 and 31.46 percent respectively for HYV pulses. For the area under irrigation the figures were 57.66 and 35.71 percent, while for the mechanically operated area they were 10.60 and 3.13 percent, respectively, in the developed and underdeveloped areas. This clearly shows that the extent of adoption of technology was significantly higher in the developed area than in the underdeveloped area. A better road network helps farmers increase their farm income, farm assets, cropping intensity, marketed surplus, and rate of adoption of new technology. With this background, this paper studies the impact of agricultural infrastructure on the adoption of modern technology in agriculture in North 24 Parganas district, West Bengal.

Keywords: agricultural infrastructure, adoption of technology, farm income, road network

2262 Multiscale Process Modeling Analysis for the Prediction of Composite Strength Allowables

Authors: Marianna Maiaru, Gregory M. Odegard

Abstract:

During the processing of high-performance thermoset polymer matrix composites, chemical reactions occur during elevated pressure and temperature cycles, causing the constituent monomers to crosslink and form a molecular network that can gradually sustain stress. As crosslinking progresses, the material naturally experiences gradual shrinkage due to the increase in covalent bonds in the network. Once the cured composite completes the cure cycle and is brought to room temperature, the thermal expansion mismatch between the fibers and the matrix causes additional residual stresses to form. These compounded residual stresses can compromise the reliability of the composite material and affect the composite strength. Composite process modeling is greatly complicated by the multiscale nature of the composite architecture. At the molecular level, the degree of cure controls the local shrinkage and thermal-mechanical properties of the thermoset. At the microscopic level, the local fiber architecture and packing affect the magnitudes and locations of residual stress concentrations. At the macroscopic level, the layup sequence controls the nature of crack initiation and propagation due to residual stresses. The goal of this research is to use molecular dynamics (MD) and finite element analysis (FEA) to predict the residual stresses in composite laminates and their effect on composite failure. MD is used to predict the polymer shrinkage and thermomechanical properties as a function of the degree of cure. This information is used as input to FEA to predict the residual stresses at the microscopic level resulting from the complete cure process. Virtual testing is subsequently conducted to predict strength allowables, and experimental characterization is used to validate the modeling.
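The molecular-level input to such a process model can be sketched with a generic nth-order cure-kinetics rate law. This is an assumed functional form with invented rate constants, not the authors' MD-derived kinetics; it only illustrates how the degree of cure, which drives shrinkage and modulus development, evolves during the cure cycle.

```python
# Euler integration of an assumed nth-order cure law:
#   d(alpha)/dt = k * (1 - alpha)**n_order
# where alpha is the degree of cure (0 = uncured, 1 = fully crosslinked).
def cure_profile(k=0.05, n_order=1.5, dt=0.1, t_end=120.0):
    alpha, t, out = 0.0, 0.0, []
    while t <= t_end:
        out.append((t, alpha))
        alpha += dt * k * (1.0 - alpha) ** n_order  # reaction slows as network forms
        t += dt
    return out

profile = cure_profile()
final_t, final_alpha = profile[-1]
print(f"degree of cure after {final_t:.0f} min: {final_alpha:.3f}")
```

The curve rises steeply early and flattens as the network forms, which is the regime where the shrinkage-induced residual stresses accumulate.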

Keywords: molecular dynamics, finite element analysis, processing modeling, multiscale modeling

2261 Solving Ill-Posed Initial Value Problems for Switched Differential Equations

Authors: Eugene Stepanov, Arcady Ponosov

Abstract:

To model gene regulatory networks, one uses ordinary differential equations with switching nonlinearities, where the initial value problem is known to be well-posed if the trajectories cross the discontinuities transversally. Otherwise, the initial value problem is usually ill-posed, which leads to theoretical and numerical complications. In this presentation, it is proposed to apply the theory of hybrid dynamical systems, rather than switched ones, to regularize the problem. 'Hybridization' of the switched system means that one attaches a discrete dynamic component (an 'automaton') that follows the trajectories of the original system and governs its dynamics at the points where the initial value problem is ill-posed, making it well-posed. The construction of the automaton is based on the classification of the attractors of a specially designed adjoint dynamical system. Several examples supporting the suggested analysis are provided in the presentation. The method can also be of interest in other applied fields where differential equations contain switchings, e.g., in neural field models.
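A toy version of the ill-posedness and its 'hybridization' can be shown with the switched system dx/dt = -sign(x): trajectories reach the discontinuity x = 0 in finite time and cannot leave it, so a naive switched integrator chatters. The automaton below, which switches to a sliding mode near x = 0, is our simplified illustration of the idea, not the construction from the presentation.

```python
# Euler integration of dx/dt = -sign(x). Without the automaton the vector
# field flips every step near x = 0 (chattering); with it, a discrete
# 'slide' mode takes over and holds the trajectory on the discontinuity.
def integrate(x0, dt=0.01, steps=200, hybrid=False):
    x, mode, flips, last_dx = x0, "flow", 0, None
    for _ in range(steps):
        if hybrid and mode == "flow" and abs(x) <= dt:
            mode = "slide"                     # automaton detects the ill-posed point
        dx = 0.0 if mode == "slide" else (-1.0 if x > 0 else 1.0)
        if last_dx is not None and dx != last_dx:
            flips += 1                         # count vector-field switches
        last_dx = dx
        x += dt * dx
    return x, flips

x_naive, flips_naive = integrate(1.0)
x_hybrid, flips_hybrid = integrate(1.0, hybrid=True)
print("naive flips:", flips_naive, " hybrid flips:", flips_hybrid)
```

The naive run switches the vector field on nearly every step after reaching the discontinuity, while the hybrid run switches once into the sliding mode and stays there.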

Keywords: hybrid dynamical systems, ill-posed problems, singular perturbation analysis, switching nonlinearities

2260 Angiogenesis and Blood Flow: The Role of Blood Flow in Proliferation and Migration of Endothelial Cells

Authors: Hossein Bazmara, Kaamran Raahemifar, Mostafa Sefidgar, Madjid Soltani

Abstract:

Angiogenesis is the formation of new blood vessels from existing vessels. Because blood flows through these vessels during angiogenesis, blood flow plays an important role in regulating the angiogenesis process. Multiple mathematical models of angiogenesis have been proposed to simulate the formation of the complicated network of capillaries around a tumor. In this work, a multi-scale model of angiogenesis is developed to show the effect of blood flow on capillary and network formation. The model spans multiple temporal and spatial scales: intracellular (molecular), cellular, and extracellular (tissue). At the intracellular (molecular) scale, the signaling cascade of endothelial cells is obtained. Two main stages in the development of a vessel are considered. In the first stage, single sprouts extend toward the tumor; here the main regulators of endothelial cell behavior are signals from the extracellular matrix. After anastomosis and the formation of closed loops, blood flow starts in the capillaries, and flow-induced signals regulate endothelial cell behavior. At the cellular scale, the growth and migration of endothelial cells are modeled with a discrete lattice Monte Carlo method called the cellular Potts model (CPM). At the extracellular (tissue) scale, the diffusion of tumor angiogenic factors in the extracellular matrix, the formation of closed loops (anastomosis), and the shear stress induced by blood flow are considered. The model is able to simulate the formation of a closed loop and its extension, and the results are validated against experimental data. The results show that, without blood flow, the capillaries are not able to maintain their integrity.
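The cellular-scale component mentioned above, the cellular Potts model, is a Metropolis Monte Carlo scheme on a lattice: spins mark cell identity, and spin copies are accepted when they lower a Hamiltonian combining adhesion and a volume constraint. The sketch below is a minimal single-cell illustration with invented parameters, not the paper's multi-scale implementation.

```python
import math
import random

# Illustrative model constants
J, LAMBDA, TARGET_V, T = 2.0, 1.0, 12, 1.0
N = 10  # lattice size (periodic boundaries)

def neighbours(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        yield (i + di) % N, (j + dj) % N

def hamiltonian(grid):
    # adhesion energy over mismatched neighbour pairs (each counted twice, so halve)
    adhesion = sum(J for i in range(N) for j in range(N)
                   for ni, nj in neighbours(i, j) if grid[i][j] != grid[ni][nj]) / 2
    volume = sum(1 for row in grid for s in row if s == 1)
    return adhesion + LAMBDA * (volume - TARGET_V) ** 2

def mc_step(grid, rng):
    # pick a site and a random neighbour; attempt to copy the neighbour's spin
    i, j = rng.randrange(N), rng.randrange(N)
    ni, nj = rng.choice(list(neighbours(i, j)))
    if grid[i][j] == grid[ni][nj]:
        return
    h0 = hamiltonian(grid)
    old, grid[i][j] = grid[i][j], grid[ni][nj]
    dh = hamiltonian(grid) - h0
    if dh > 0 and rng.random() >= math.exp(-dh / T):
        grid[i][j] = old  # Metropolis rejection: revert the copy

rng = random.Random(1)
# a 3x3 'endothelial cell' (spin 1) in medium (spin 0), below its target volume
grid = [[1 if 3 <= i < 6 and 3 <= j < 6 else 0 for j in range(N)] for i in range(N)]
for _ in range(2000):
    mc_step(grid, rng)
volume = sum(s for row in grid for s in row)
print("cell volume after relaxation:", volume)
```

The volume constraint pulls the cell from its initial 9 pixels toward the target volume while adhesion keeps the boundary compact, which is the mechanism CPM uses to model cell growth and migration.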

Keywords: angiogenesis, endothelial cells, multi-scale model, cellular Potts model, signaling cascade

2259 An Investigation Enhancing E-Voting Application Performance

Authors: Aditya Verma

Abstract:

E-voting using blockchain provides a distributed system in which data are present on each node of the network and are reliable and secure thanks to blockchain's immutability. This work compares various blockchain consensus algorithms used for e-voting applications in the past, based on performance and node scalability, and selects the optimal one. It then improves on a previous implementation by proposing solutions for the loopholes of that best-performing consensus algorithm in our chosen application, e-voting.
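As background to the consensus comparison, BFT-family protocols such as PBFT tolerate f faulty nodes only when n ≥ 3f + 1, and a decision requires a quorum of 2f + 1 matching votes. A minimal sketch of that arithmetic (generic protocol facts, not the paper's specific algorithm):

```python
# Quorum arithmetic for BFT-style consensus with n nodes.
def max_faulty(n):
    """Largest f such that n >= 3f + 1 still holds."""
    return (n - 1) // 3

def has_quorum(votes_for, n):
    """A decision needs at least 2f + 1 matching votes."""
    f = max_faulty(n)
    return votes_for >= 2 * f + 1

print(max_faulty(4), has_quorum(3, 4))   # 4 nodes tolerate 1 fault; 3 votes decide
print(max_faulty(7), has_quorum(4, 7))   # 7 nodes tolerate 2 faults; 4 < 5 votes
```

This is why node scalability matters in the comparison: every added tolerated fault requires three more nodes and a larger quorum per decision.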

Keywords: blockchain, parallel bft, consensus algorithms, performance

2258 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review

Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha

Abstract:

Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. Natural language processing techniques are commonly used for text classification and unbiased decision-making, yet proper classification of this textual information in a given context remains difficult. We therefore conducted a systematic review of previous literature on sentiment classification and the AI-based techniques used, in order to better understand how to design and develop a robust and more accurate sentiment classifier that can correctly distinguish, with high accuracy, social media text in a given context between hate speech and inverted compliments. We evaluated over 250 articles from digital sources such as ScienceDirect, ACM, Google Scholar, and IEEE Xplore and whittled the pool down to 31 studies. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources such as Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques such as CNN+LSTM, CNN+GRU, and CNN+BERT outperformed both single deep learning techniques and machine learning techniques. The Python programming language was preferred over Java for sentiment analyzer development owing to its simplicity and AI-oriented libraries. Based on some of the important findings from this study, we make recommendations for future research.

Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text

2257 Packaging Processes for the Implantable Medical Microelectronics

Authors: Chung-Yu Wu, Chia-Chi Chang, Wei-Ming Chen, Pu-Wei Wu, Shih-Fan Chen, Po-Chun Chen

Abstract:

Electrostimulation medical devices for neural diseases require electroactive and biocompatible materials to transmit signals from electrodes to target tissues. Protecting the surrounding tissues has become a great challenge for long-term implants. In this study, we designed back-end packaging processes that are more compatible, efficient, and reliable than the current state of the art. We explored a hermetic packaging process with high adhesion quality and uniformity for biocompatible devices intended for long-term implantation. This approach provides both excellent biocompatibility and protection to biomedical electronic devices through conformal coating of biocompatible materials. We successfully developed a packaging process that exposes the stimulating electrode while covering all other faces of the chip with high-quality protection, preventing both device leakage and body-fluid ingress.

Keywords: biocompatible package, medical microelectronics, surface coating, long-term implantation

2256 Credit Risk Assessment Using Rule Based Classifiers: A Comparative Study

Authors: Salima Smiti, Ines Gasmi, Makram Soui

Abstract:

Credit risk is the most important issue for financial institutions. Its assessment is an important task used to predict defaulter customers and classify customers as good or bad payers. To this end, numerous techniques have been applied for credit risk assessment. However, to our knowledge, several evaluation techniques are black-box models, such as neural networks and SVMs, which generate applicants' classes without any explanation. In this paper, we propose to assess credit risk using rule-based classification methods, whose output is a set of rules that describe and explain the decision. We compare seven classification algorithms (JRip, Decision Table, OneR, ZeroR, Fuzzy Rule, PART, and Genetic Programming (GP)), where the goal is to find the best rules satisfying several criteria: accuracy, sensitivity, and specificity. The obtained results confirm the efficiency of the GP algorithm on the German and Australian datasets compared to the other rule-based techniques for predicting credit risk.
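One of the compared learners, OneR, is simple enough to sketch in full: it keeps the single-attribute rule set with the best training accuracy. The applicant data below are invented for illustration; this is a minimal sketch of the algorithm, not the paper's experimental setup.

```python
# Minimal OneR: for each attribute, build one rule per attribute value
# (predict that bucket's majority class), then keep the attribute whose
# rule set classifies the most training rows correctly.
from collections import Counter, defaultdict

def one_r(rows, target):
    best = None
    attrs = [a for a in rows[0] if a != target]
    for attr in attrs:
        buckets = defaultdict(Counter)
        for row in rows:
            buckets[row[attr]][row[target]] += 1
        rules = {v: c.most_common(1)[0][0] for v, c in buckets.items()}
        correct = sum(c[rules[v]] for v, c in buckets.items())
        if best is None or correct > best[2]:
            best = (attr, rules, correct)
    return best

# Invented toy applicants
applicants = [
    {"income": "high", "history": "good", "risk": "good"},
    {"income": "high", "history": "bad",  "risk": "good"},
    {"income": "low",  "history": "good", "risk": "good"},
    {"income": "low",  "history": "bad",  "risk": "bad"},
    {"income": "low",  "history": "bad",  "risk": "bad"},
]
attr, rules, correct = one_r(applicants, "risk")
print(attr, rules, f"{correct}/{len(applicants)} correct")
```

The returned rule set is directly readable ("if income is low, predict bad"), which is exactly the explainability advantage the abstract claims over black-box models.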

Keywords: credit risk assessment, classification algorithms, data mining, rule extraction

2255 FMR1 Gene Carrier Screening for Premature Ovarian Insufficiency in Females: An Indian Scenario

Authors: Sarita Agarwal, Deepika Delsa Dean

Abstract:

Like the task of transferring photo images to artistic images, image-to-image translation aims to translate the data to the imitated data which belongs to the target domain. Neural Style Transfer and CycleGAN are two well-known deep learning architectures used for photo image-to-art image transfer. However, studies involving these two models concentrate on one-to-one domain translation, not one-to-multi domains translation. Our study tries to investigate deep learning architectures, which can be controlled to yield multiple artistic style translation only by adding a conditional vector. We have expanded CycleGAN and constructed Conditional CycleGAN for 5 kinds of categories translation. Our study found that the architecture inserting conditional vector into the middle layer of the Generator could output multiple artistic images.

Keywords: genetic counseling, FMR1 gene, fragile x-associated primary ovarian insufficiency, premutation

2254 A Mathematical Framework for Expanding a Railway’s Theoretical Capacity

Authors: Robert L. Burdett, Bayan Bevrani

Abstract:

Analytical techniques for measuring and planning railway capacity expansion activities are considered in this article. A preliminary mathematical framework involving track duplication and section subdivisions is proposed for this task. In railways, these features have a great effect on network performance, and for this reason they have been considered here. Additional motivation arises from the limitations of prior models, which have not included them.
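The notion of theoretical capacity and the effect of duplication and subdivision can be sketched with a bottleneck calculation. This is our simplification for illustration, not the authors' formulation: a corridor's throughput over a time window is limited by its slowest section, subdividing a section shrinks the effective headway, and duplicating a track multiplies that section's flow.

```python
# Bottleneck view of theoretical corridor capacity (trains per time window).
def section_capacity(traverse_min, tracks=1, subdivisions=1, window_min=60):
    # Subdividing a section lets a following train enter sooner, so the
    # headway shrinks to the longest sub-section; parallel tracks add flow.
    headway = traverse_min / subdivisions
    return tracks * window_min / headway

def corridor_capacity(sections):
    return min(section_capacity(**s) for s in sections)

before = corridor_capacity([{"traverse_min": 6},
                            {"traverse_min": 12},
                            {"traverse_min": 4}])
after = corridor_capacity([{"traverse_min": 6},
                           {"traverse_min": 12, "subdivisions": 2},
                           {"traverse_min": 4}])
print(before, "->", after, "trains/hour")
```

Subdividing only the bottleneck section doubles the corridor's theoretical capacity, which is why the framework targets exactly these two interventions.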

Keywords: capacity analysis, capacity expansion, railways, track sub division, track duplication

2253 Disease Trajectories in Relation to Poor Sleep Health in the UK Biobank

Authors: Jiajia Peng, Jianqing Qiu, Jianjun Ren, Yu Zhao

Abstract:

Background: Insufficient sleep has been focused on as a public health epidemic. However, a comprehensive analysis of disease trajectory associated with unhealthy sleep habits is still unclear currently. Objective: This study sought to comprehensively clarify the disease's trajectory in relation to the overall poor sleep pattern and unhealthy sleep behaviors separately. Methods: 410,682 participants with available information on sleep behaviors were collected from the UK Biobank at the baseline visit (2006-2010). These participants were classified as having high- and low risk of each sleep behavior and were followed from 2006 to 2020 to identify the increased risks of diseases. We used Cox regression to estimate the associations of high-risk sleep behaviors with the elevated risks of diseases, and further established diseases trajectory using significant diseases. The low-risk unhealthy sleep behaviors were defined as the reference. Thereafter, we also examined the trajectory of diseases linked with the overall poor sleep pattern by combining all of these unhealthy sleep behaviors. To visualize the disease's trajectory, network analysis was used for presenting these trajectories. Results: During a median follow-up of 12.2 years, we noted 12 medical conditions in relation to unhealthy sleep behaviors and the overall poor sleep pattern among 410,682 participants with a median age of 58.0 years. The majority of participants had unhealthy sleep behaviors; in particular, 75.62% with frequent sleeplessness, and 72.12% had abnormal sleep durations. Besides, a total of 16,032 individuals with an overall poor sleep pattern were identified. In general, three major disease clusters were associated with overall poor sleep status and unhealthy sleep behaviors according to the disease trajectory and network analysis, mainly in the digestive, musculoskeletal and connective tissue, and cardiometabolic systems. 
Of note, two circulatory disease pairs (I25→I20 and I48→I50) showed the highest risks following these unhealthy sleep habits. Additionally, significant differences in disease trajectories were observed in relation to sex and sleep medication among individuals with poor sleep status. Conclusions: We identified the major disease clusters and high-risk diseases among participants with overall poor sleep health and unhealthy sleep behaviors, respectively. These findings suggest the need to investigate potential interventions targeting these key pathways.
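As a minimal, illustrative sketch of the Cox modelling step (not the authors' code; the toy data below are invented), the log hazard ratio for a single binary exposure such as a high-risk sleep behavior can be fitted by Newton's method on the Cox partial likelihood:

```python
import math

def cox_beta(times, events, x, iters=50):
    """Fit the log hazard ratio (beta) of one binary covariate by
    Newton's method on the Breslow partial likelihood (no tied times)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    beta = 0.0
    for _ in range(iters):
        grad = hess = 0.0
        for k, i in enumerate(order):
            if not events[i]:
                continue  # censored subjects contribute only to risk sets
            risk = order[k:]  # subjects still at risk at this event time
            s0 = sum(math.exp(beta * x[j]) for j in risk)
            s1 = sum(x[j] * math.exp(beta * x[j]) for j in risk)
            s2 = sum(x[j] * x[j] * math.exp(beta * x[j]) for j in risk)
            grad += x[i] - s1 / s0
            hess += s2 / s0 - (s1 / s0) ** 2
        if hess == 0:
            break
        beta += grad / hess  # Newton step on the concave log-likelihood
    return beta

# Toy data: exposed subjects (x=1) tend to fail earlier, so beta > 0
times = [1, 2, 3, 4, 5, 6, 7, 8]
events = [1] * 8
x = [1, 0, 1, 1, 0, 1, 0, 0]
hazard_ratio = math.exp(cox_beta(times, events, x))
```

A hazard ratio above 1 corresponds to the elevated disease risk reported for high-risk sleep behaviors; a real analysis would use a full survival package with covariate adjustment.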

Keywords: sleep, poor sleep, unhealthy sleep behaviors, disease trajectory, UK Biobank

Procedia PDF Downloads 65
2252 GRABTAXI: A Taxi Revolution in Thailand

Authors: Danuvasin Charoen

Abstract:

The study investigates the business process and business model of GRABTAXI. The paper also discusses how the company implemented strategies to gain competitive advantages. The data are derived from the analysis of secondary data and from in-depth interviews with staff, taxi drivers, and key customers. The findings indicate that the company’s competitive advantages come from being the first mover, emphasizing the ease of use and tangible benefits of the application, and employing a network-effect strategy.

Keywords: taxi, mobile application, innovative business model, Thailand

Procedia PDF Downloads 284
2251 Intrusion Detection Techniques in NaaS in the Cloud: A Review

Authors: Rashid Mahmood

Abstract:

Network as a service (NaaS) has become well established over the last few years in many applications, including mission-critical ones. In NaaS, prevention alone is not adequate where security is concerned, so detection methods should be added to address NaaS security issues. Authentication and encryption were considered the first solutions to the NaaS security problem, but they are no longer sufficient as NaaS use increases. In this paper, we present the concept of intrusion detection, survey the major intrusion detection techniques in NaaS, and compare them across several important criteria.

Keywords: IDS, cloud, NaaS, detection

Procedia PDF Downloads 300
2250 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning

Authors: Richard O’Riordan, Saritha Unnikrishnan

Abstract:

Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, ranging from computer vision techniques to deep learning methods. The progression from the lower levels of autonomy defined in the SAE framework to higher levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functionality; at high levels of autonomy, these algorithms have no room for error. Although current research details existing computer vision and deep learning algorithms, their methodologies and individual results, it also details the challenges the algorithms face, the resources they need to operate, and the shortcomings experienced when detecting lanes in certain weather and lighting conditions. This paper explores these shortcomings and attempts to implement a lane detection algorithm that could improve AV lane detection systems. It uses a pre-trained LaneNet model to classify lane and non-lane pixels via binary segmentation, first on the existing BDD100k dataset and then on a custom dataset generated locally. The first set of selected roads are modern, well-laid roads with up-to-date infrastructure and lane markings, while the second road network is an older one whose infrastructure and lane markings reflect its age. The performance of the proposed method is evaluated on the custom dataset and compared with its performance on the BDD100k dataset. In summary, this paper uses transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
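Comparing binary lane/non-lane segmentations across the two datasets typically relies on pixel-level intersection-over-union; a minimal sketch over flattened 0/1 masks (our illustration, not the paper's evaluation code):

```python
def pixel_iou(pred, truth):
    """Intersection-over-union of two binary lane masks,
    given as flat sequences of 0/1 pixel labels."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # both empty: perfect match

# Prediction overlaps the ground-truth lane on 3 of its 4 pixels,
# plus one false-positive pixel: IoU = 3 / 5 = 0.6
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 1, 1, 0, 0, 0]
score = pixel_iou(pred, truth)
```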

Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection

Procedia PDF Downloads 80
2249 High-Intensity, Short-Duration Electric Pulses Induced Action Potential in Animal Nerves

Authors: Jiahui Song, Ravindra P. Joshi

Abstract:

The use of high-intensity, short-duration electric pulses is a promising development with many biomedical applications. The uses include irreversible electroporation for killing abnormal cells, reversible poration for drug and gene delivery, neuromuscular manipulation, and the shrinkage of tumors, etc. High intensity, short-duration electric pulses result in the creation of high-density, nanometer-sized pores in the cellular membrane. This electroporation amounts to localized modulation of the transverse membrane conductance, and effectively provides a voltage shunt. The electrically controlled changes in the trans-membrane conductivity could be used to affect neural traffic and action potential propagation. A rat was taken as the representative example in this research. The simulation study shows the pathway from the sensorimotor cortex down to the spinal motoneurons, and effector muscles could be reversibly blocked by using high-intensity, short-duration electrical pulses. Also, actual experimental observations were compared against simulation predictions.

Keywords: action potential, electroporation, high-intensity, short-duration

Procedia PDF Downloads 250
2248 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the site-surveying workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results prove that, in comparison to traditional methods, the proposed SRCLoc method can significantly improve positioning performance and reduce radio map construction costs.
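The fingerprinting pipeline ultimately matches a live RSSI measurement against the constructed radio map. The matching step itself can be sketched as a weighted k-nearest-neighbour lookup; this is a generic illustration with invented signal values, not the paper's S-DCGAN model:

```python
import numpy as np

def wknn_locate(radio_map, positions, rssi, k=3):
    """Weighted k-nearest-neighbour position estimate.
    radio_map: (n_refpoints, n_aps) fingerprint RSSI matrix,
    positions: (n_refpoints, 2) coordinates, rssi: live measurement."""
    d = np.linalg.norm(radio_map - rssi, axis=1)  # signal-space distance
    idx = np.argsort(d)[:k]                       # k closest fingerprints
    w = 1.0 / (d[idx] + 1e-6)                     # closer -> heavier weight
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

# Three reference points along a corridor; the query signature is
# almost identical to the fingerprint recorded at x = 0
radio_map = np.array([[-40.0, -70.0], [-70.0, -40.0], [-55.0, -55.0]])
positions = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 0.0]])
estimate = wknn_locate(radio_map, positions, np.array([-41.0, -69.0]))
```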

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 48
2247 Remote Sensing and GIS Based Methodology for Identification of Low Crop Productivity in Gautam Buddha Nagar District

Authors: Shivangi Somvanshi

Abstract:

Poor crop productivity in salt-affected environments in the country is due to insufficient and untimely canal supply to agricultural land and inefficient field water management practices. The situation can degrade further owing to inadequate maintenance of the canal network, ongoing secondary soil salinization and waterlogging, and worsening groundwater quality. Large patches of low productivity occur in irrigation commands due to waterlogging and salt-affected soil, particularly in rainfall-scarce years. Satellite remote sensing has been used for mapping areas of low crop productivity, waterlogging, and salt-affected soil in irrigation commands. The spatial results obtained for these problems so far are of limited reliability for further use because soil quality parameters change rapidly over the years. Existing spatial databases of the canal network and flow data, groundwater quality, and salt-affected soil were obtained from central and state line departments/agencies and were integrated with GIS. An integrated methodology based on remote sensing and GIS was therefore developed in the ArcGIS environment on the basis of canal supply status, groundwater quality, salt-affected soils, and the satellite-derived vegetation index (NDVI), salinity index (NDSI) and waterlogging index (NSWI). This methodology was tested for the identification and delineation of areas of low productivity in the Gautam Buddha Nagar district (Uttar Pradesh). The affected area was found to lie mainly in the Dankaur and Jewar blocks of the district. The problem area was verified with ground data and found to be approximately 78% accurate. The methodology has the potential to be used in other irrigation commands in the country to obtain reliable spatial data on low crop productivity.
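Of the three satellite-derived indices, NDVI is the simplest: the normalized difference of the near-infrared and red reflectance bands. A minimal sketch (the band values below are invented for illustration):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, (NIR - red) / (NIR + red).
    Healthy vegetation reflects strongly in NIR, giving values near 1;
    bare or degraded soil gives values near 0."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids 0/0

# Two example pixels: vigorous crop canopy vs. a low-productivity patch
crop = float(ndvi(0.50, 0.08))  # high index, dense healthy canopy
bare = float(ndvi(0.30, 0.25))  # near zero, degraded/bare soil
```

NDSI and NSWI follow the same normalized-difference pattern over different band pairs.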

Keywords: remote sensing, GIS, salt affected soil, crop productivity, Gautam Buddha Nagar

Procedia PDF Downloads 270
2246 Green Crypto Mining: A Quantitative Analysis of the Profitability of Bitcoin Mining Using Excess Wind Energy

Authors: John Dorrell, Matthew Ambrosia, Abilash

Abstract:

This paper employs econometric analysis to quantify the potential profit wind farms can earn by allocating excess wind energy to power bitcoin mining machines. Cryptocurrency mining consumes a substantial amount of electricity worldwide, and wind farms lose a significant amount of the energy they produce because of the intermittent nature of the resource: supply does not always match consumer demand. By pairing the weaknesses of these two technologies, we can improve efficiency and open a sustainable path to mining cryptocurrencies. This paper uses historical wind energy data from the ERCOT network in Texas and cryptocurrency data from 2000-2021 to create 4-year return-on-investment projections. Our research model incorporates the price of bitcoin, the price of the miner, the hash rate of the miner relative to the network hash rate, the block reward, the bitcoin transaction fees awarded to miners, the mining pool fees, the cost of electricity, and the percentage of time the miner will be running, to demonstrate that wind farms generate enough excess energy to mine bitcoin profitably. Excess wind energy can thus serve as a financial battery, converting otherwise wasted electricity into economic value. Our findings indicate that wind energy producers can earn a profit while taking little, if any, electricity from the grid. According to our results, Bitcoin mining could yield as much as a 1347% and 805% return on investment for starting dates of November 1, 2021, and November 1, 2022, respectively, using wind farm curtailment. This paper is helpful to policymakers and investors in determining efficient and sustainable ways to power our economic future; it proposes a practical solution to the problem of crypto mining energy consumption and points toward a more sustainable energy future for Bitcoin.
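The model combines those inputs into an expected daily profit; a simplified sketch of the revenue/cost arithmetic (every number in the example is hypothetical, not the paper's data):

```python
def daily_mining_profit(miner_ths, network_ths, block_reward_btc,
                        fees_per_block_btc, btc_price, pool_fee,
                        power_kw, electricity_per_kwh, uptime):
    """Expected daily profit of one miner: its share of the network
    hash rate times the day's block rewards and transaction fees, net
    of pool fees, minus the electricity bill for the hours it runs."""
    blocks_per_day = 144  # one block roughly every 10 minutes
    share = miner_ths / network_ths
    revenue = (share * blocks_per_day
               * (block_reward_btc + fees_per_block_btc)
               * btc_price * (1 - pool_fee) * uptime)
    cost = power_kw * 24 * uptime * electricity_per_kwh
    return revenue - cost

# Same hypothetical miner on grid power vs. near-free curtailed wind
grid = daily_mining_profit(100, 2e8, 6.25, 0.1, 30000, 0.02,
                           3.25, 0.12, 0.9)
wind = daily_mining_profit(100, 2e8, 6.25, 0.1, 30000, 0.02,
                           3.25, 0.01, 0.9)
```

The electricity price is the only input that changes between the two calls, which is exactly the lever curtailed wind energy pulls.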

Keywords: bitcoin, mining, economics, energy

Procedia PDF Downloads 14
2245 Undersea Communications Infrastructure: Risks, Opportunities, and Geopolitical Considerations

Authors: Lori W. Gordon, Karen A. Jones

Abstract:

Today’s high-speed data connectivity depends on a vast global network of infrastructure across space, air, land, and sea, with undersea cable infrastructure (UCI) serving as the primary means for intercontinental and ‘long-haul’ communications. The UCI landscape is changing and includes an increasing variety of state actors, such as the growing economies of Brazil, Russia, India, China, and South Africa. Non-state commercial actors, such as hyper-scale content providers including Google, Facebook, Microsoft, and Amazon, are also seeking to control their data and networks through significant investments in submarine cables. Active investments by both state and non-state actors will invariably influence the growth, geopolitics, and security of this sector. Beyond these hyper-scale content providers, there are new commercial satellite communication providers. These new players include traditional geosynchronous (GEO) satellites that offer broad coverage, high-throughput GEO satellites offering high capacity with spot-beam technology, and low earth orbit (LEO) ‘mega constellations’ offering global broadband services, as well as potential new entrants such as High Altitude Platforms (HAPS) offering low-latency connectivity and LEO constellations offering high-speed optical mesh networks, i.e., ‘fiber in the sky.’ This paper focuses on understanding the role of submarine cables within the larger context of the global data commons, spanning space, terrestrial, air, and sea networks, including an analysis of national security policy and geopolitical implications. As network operators and commercial and government stakeholders plan for emerging technologies and architectures, hedging risks for future connectivity will ensure that our data backbone remains secure for years to come.

Keywords: communications, global, infrastructure, technology

Procedia PDF Downloads 62
2244 Constructing a Probabilistic Ontology from a DBLP Data

Authors: Emna Hlel, Salma Jamousi, Abdelmajid Ben Hamadou

Abstract:

Every model for knowledge representation of real-world applications must be able to cope with the effects of uncertain phenomena. One of the main defects of classical ontologies is their inability to represent and reason with uncertainty. To remedy this defect, we propose a method for constructing a probabilistic ontology that integrates uncertain information into an ontology modeling a set of DBLP (Digital Bibliography & Library Project) publications, using a probabilistic model.
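As a toy illustration of the uncertain reasoning such a probabilistic ontology supports (the concepts and probabilities below are invented, not drawn from the authors' DBLP model), a posterior over an uncertain class membership can be computed by Bayes' rule, the elementary operation behind Bayesian-network inference:

```python
def posterior(prior, likelihood, evidence):
    """Posterior P(class | evidence) over a discrete class variable,
    given prior P(class) and likelihoods P(evidence | class)."""
    unnorm = {c: prior[c] * likelihood[c][evidence] for c in prior}
    z = sum(unnorm.values())  # normalizing constant P(evidence)
    return {c: v / z for c, v in unnorm.items()}

# Uncertain topic of a DBLP-style publication given one observed feature
prior = {"databases": 0.6, "machine_learning": 0.4}
likelihood = {"databases": {"cites_vldb": 0.8},
              "machine_learning": {"cites_vldb": 0.2}}
post = posterior(prior, likelihood, "cites_vldb")
```

A probabilistic ontology chains many such conditional tables over its concepts and relations instead of a single one.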

Keywords: classical ontology, probabilistic ontology, uncertainty, Bayesian network

Procedia PDF Downloads 326
2243 Bioinformatic Prediction of Hub Genes by Analysis of Signaling Pathways, Transcriptional Regulatory Networks and DNA Methylation Pattern in Colon Cancer

Authors: Ankan Roy, Niharika, Samir Kumar Patra

Abstract:

Anomalous nexus of complex topological assemblies and spatiotemporal epigenetic choreography at chromosomal territory may form the most sophisticated regulatory layer of gene expression in cancer. Colon cancer is one of the leading malignant neoplasms of the lower gastrointestinal tract worldwide. There is still a paucity of information about the complex molecular mechanisms of colonic cancerogenesis. Bioinformatic prediction and analysis help to identify essential genes and significant pathways for monitoring and conquering this deadly disease. The present study investigates and explores potential hub genes as biomarkers and effective therapeutic targets for colon cancer treatment. Gene expression profile datasets from colon cancer patient samples, namely GSE44076, GSE20916, and GSE37364, were downloaded from the Gene Expression Omnibus (GEO) database and thoroughly screened using the GEO2R tool and Funrich software to find common differentially expressed genes (DEGs). Other approaches, including Gene Ontology (GO) and KEGG pathway analysis, Protein-Protein Interaction (PPI) network construction and hub gene investigation, Overall Survival (OS) analysis, gene correlation analysis, methylation pattern analysis, and hub gene-transcription factor regulatory network construction, were performed and validated using various bioinformatics tools. Initially, we identified 166 DEGs, including 68 up-regulated and 98 down-regulated genes. Up-regulated genes are mainly associated with cytokine-cytokine receptor interaction, the IL17 signaling pathway, ECM-receptor interaction, focal adhesion and the PI3K-Akt pathway. Down-regulated genes are enriched in metabolic pathways, retinol metabolism, steroid hormone biosynthesis, and bile secretion. From the protein-protein interaction network, thirty hub genes with high connectivity were selected using the MCODE and cytoHubba plugins. 
Survival analysis, expression validation, correlation analysis, and methylation pattern analysis were further verified using TCGA data. Finally, we predicted COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as potential master regulators in colonic cancerogenesis. Moreover, our experimental data highlight that disruption of lipid rafts and the RAS/MAPK signaling cascade affects this gene hub at the mRNA level. We identified COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as determinant hub genes in colon cancer progression. They can be considered biomarkers for diagnosis and promising therapeutic targets in colon cancer treatment. Additionally, our experimental data indicate that signaling pathways act as a connecting link between the membrane hub and the gene hub.
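The initial DEG screening step can be sketched as a fold-change plus t-statistic filter over an expression matrix; this is a generic illustration with invented numbers, not the GEO2R settings used in the study:

```python
import numpy as np

def screen_degs(tumor, normal, genes, lfc_cut=1.0, t_cut=2.0):
    """Split genes into up-/down-regulated lists using log2 fold change
    and a Welch t-statistic (matrices are genes x samples)."""
    tumor = np.asarray(tumor, dtype=float)
    normal = np.asarray(normal, dtype=float)
    lfc = np.log2(tumor.mean(axis=1) + 1) - np.log2(normal.mean(axis=1) + 1)
    se = np.sqrt(tumor.var(axis=1, ddof=1) / tumor.shape[1]
                 + normal.var(axis=1, ddof=1) / normal.shape[1])
    t = (tumor.mean(axis=1) - normal.mean(axis=1)) / (se + 1e-12)
    up = [g for g, l, s in zip(genes, lfc, t) if l > lfc_cut and s > t_cut]
    down = [g for g, l, s in zip(genes, lfc, t) if l < -lfc_cut and s < -t_cut]
    return up, down

# Toy matrix: one clearly up-regulated gene, one down, one unchanged
tumor = [[100, 110, 105, 95], [10, 12, 11, 9], [50, 52, 49, 51]]
normal = [[10, 12, 11, 9], [100, 110, 105, 95], [50, 49, 51, 50]]
up, down = screen_degs(tumor, normal, ["GENE_UP", "GENE_DOWN", "GENE_FLAT"])
```

A real screen would add multiple-testing correction and proper p-values on top of this filter.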

Keywords: hub genes, colon cancer, DNA methylation, epigenetic engineering, bioinformatic predictions

Procedia PDF Downloads 108
2242 An Inventory Management Model to Manage the Stock Level for Irregular Demand Items

Authors: Riccardo Patriarca, Giulio Di Gravio, Francesco Costantino, Massimo Tronci

Abstract:

An accurate inventory management policy plays a crucial role in several high-availability sectors. In these sectors, due to the high cost of spares and backorders, an (S-1, S) replenishment policy is necessary for high-availability items. The policy enables the shipment of a substitute item whenever the inventory level decreases by one. This policy can be modelled following the Multi-Echelon Technique for Recoverable Item Control (METRIC). METRIC is a system-based technique that allows defining the optimum stock level in a multi-echelon network, adopting measures in line with the decision-maker’s perspective. METRIC defines an availability-cost function with inventory costs and required service levels, using as inputs data about the demand trend, the supply and maintenance characteristics of the network, and the budget/availability constraints. The traditional METRIC relies on the hypothesis that a Poisson distribution represents the demand distribution well for items with a low failure rate. In this research, however, we explore the effects of using a Poisson distribution to model the demand of low-failure-rate items characterized by an irregular demand trend, a characteristic not included in the traditional METRIC formulation, which therefore needs revising. Using the CV (Coefficient of Variation) and ADI (Average inter-Demand Interval) classification, we define the inherent flaws of the Poisson-based METRIC for irregular-demand items and propose an innovative ad hoc distribution which can better fit irregular demands. This distribution allows defining proper stock levels that reduce the stocking and backorder costs caused by high irregularities in the demand trend. A case study in the aviation domain clarifies the benefits of this innovative METRIC approach.
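The CV/ADI screening can be sketched with the standard Syntetos-Boylan cut-offs (ADI ≈ 1.32, CV² ≈ 0.49); a minimal illustration with made-up demand series (not the case-study data):

```python
import statistics

def classify_demand(series, adi_cut=1.32, cv2_cut=0.49):
    """Classify a per-period demand series by ADI (average interval
    between non-zero demands) and squared CV of the non-zero sizes."""
    sizes = [d for d in series if d > 0]
    adi = len(series) / len(sizes)
    cv2 = (statistics.pstdev(sizes) / statistics.mean(sizes)) ** 2
    if cv2 <= cv2_cut:
        return "smooth" if adi <= adi_cut else "intermittent"
    return "erratic" if adi <= adi_cut else "lumpy"

regular = classify_demand([5, 6, 5, 7, 6, 5])                # frequent, stable
spare = classify_demand([0, 0, 12, 0, 1, 0, 0, 20, 0, 2])    # rare, variable
```

Items falling in the intermittent or lumpy quadrants are exactly those for which the Poisson assumption of the traditional METRIC breaks down.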

Keywords: METRIC, inventory management, irregular demand, spare parts

Procedia PDF Downloads 328
2241 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification

Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos

Abstract:

Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancers in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist’s visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance, and using conventional MRI alone to provide a definite diagnosis could potentially lead to inaccurate results; histopathological examination of biopsy samples is therefore currently considered the gold standard for obtaining a definite diagnosis. Machine learning is the study of computational algorithms that can learn mathematical relationships and patterns, simple or complex, from empirical and scientific data in order to make reliable decisions. Accordingly, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR or radiography images, and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and differentiating between different brain tissue types. Moreover, a future purpose is to present an alternative way of quick and accurate diagnosis in order to save time and resources in the daily medical workflow. 
Materials and Methods: A cohort of 80 pediatric patients with a diagnosis of posterior fossa tumor was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas and 20 healthy children. The MR sequences used for every patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed in a number of steps, including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of candidate features was chosen, including age, tumor shape characteristics, image intensity characteristics and texture features. After selecting the features that achieve the highest accuracy with the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support-Vector Machines, C4.5 Decision Tree and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA and MatLab platforms, respectively. Results-Conclusions: The results and the classification accuracy for each type of glioma under the four algorithms are still in progress.
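Of the four classifiers, k-Nearest Neighbour is the simplest to sketch; a generic illustration on made-up two-dimensional feature vectors (the study's real feature vectors are higher-dimensional intensity/texture descriptors, and the study itself uses WEKA):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Majority vote among the k training samples closest to the
    query in Euclidean feature space."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], query))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Two well-separated clusters of feature vectors, one per tumor type
train_X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
           [5.0, 5.0], [5.0, 6.0], [6.0, 5.0]]
train_y = ["ependymoma"] * 3 + ["medulloblastoma"] * 3
label = knn_predict(train_X, train_y, [0.5, 0.5])
```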

Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology

Procedia PDF Downloads 131