Search results for: network on chip

843 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models

Authors: Ethan James

Abstract:

Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to a lack of conventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNN) has recently gained considerable interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model for analyzing these retinal images that provides a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a residual neural network architecture with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, thereby facilitating earlier treatment, which results in improved post-treatment outcomes.
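
For illustration, a minimal sketch of a residual image classifier of the kind described (PyTorch assumed; the layer sizes, class names, and the use of standard average pooling in place of the cyclic pooling mentioned above are illustrative assumptions, not the authors' implementation):

```python
# Illustrative sketch only -- not the authors' model. Assumes PyTorch;
# standard pooling is used where the abstract mentions "cyclic pooling".
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """3x3 conv -> BN -> ReLU -> 3x3 conv -> BN, with identity skip."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual connection

class OCTClassifier(nn.Module):
    """Grayscale OCT scan -> one of n_classes retinal-disease labels."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, 7, stride=2, padding=3),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.MaxPool2d(2))
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

model = OCTClassifier()
logits = model(torch.randn(8, 1, 224, 224))  # batch of 8 dummy scans
print(logits.shape)  # torch.Size([8, 4])
```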

Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina

Procedia PDF Downloads 142
842 Economic Policy Promoting Economically Rational Behavior of Start-Up Entrepreneurs in Georgia

Authors: Gulnaz Erkomaishvili

Abstract:

Introduction: The pandemic and the current economic crisis have created problems for entrepreneurship and, therefore, for start-up entrepreneurs. The paper presents the challenges of start-up entrepreneurs in Georgia during the pandemic and an analysis of the state's economic policy measures. Despite many problems, the study found that in 54.2% of start-ups surveyed during the pandemic, innovation opportunities were growing. It can be stated that the pandemic was a good opportunity to increase the innovative capacity of an enterprise. 52% of the surveyed start-up entrepreneurs managed to adapt to the situation and increase the sale of their products/services through remote channels. As for the assessment of state support measures by start-up entrepreneurs, a large number of Georgian start-ups do not assess the measures implemented by the state positively. Methodology: The research process uses methods of analysis and synthesis, quantitative and qualitative methods, interviews/surveys, grouping, relative and average values, graphing, comparison, data analysis, and others. Main Findings: Studies have shown that the main problems for start-up entrepreneurs remain inaccessible funding, a workers' qualifications gap, inflation, the level of taxes, regulation, political instability, inadequate provision of infrastructure, and other factors. Conclusions: The state should take the following measures to support business start-ups: create an attractive environment for investment; ensure the availability of soft loans; create an insurance system; develop infrastructure; increase the effectiveness of tax policy (simplicity of the tax system, clarity, optimal tax level); and promote export growth (develop a strategy for opening up international markets, build up a broad marketing network, etc.).

Keywords: start-up entrepreneurs, startups, start-up entrepreneurs support programs, start-up entrepreneurs support economic policy

Procedia PDF Downloads 83
841 Sulfur-Doped Hierarchically Porous Boron Nitride Nanosheets as an Efficient Carbon Dioxide Adsorbent

Authors: Sreetama Ghosh, Sundara Ramaprabhu

Abstract:

Carbon dioxide gas has been a major cause of the worldwide increase in the greenhouse effect, which leads to climate change and global warming. CO₂ capture and sequestration has therefore become an effective way to reduce the concentration of CO₂ in the environment. One way to capture CO₂ in porous materials is by an adsorption process. A potential material in this respect is porous hexagonal boron nitride, or 'white graphene', a well-known two-dimensional layered material with very high thermal stability. A sample with a hierarchical pore structure and high specific surface area shows excellent performance in capturing carbon dioxide gas, thereby mitigating the problem of environmental pollution to a certain extent. Besides, the presence of sulfur as well as nitrogen in the sample synergistically helps to increase the adsorption capacity. In this work, a cost-effective single-step synthesis of highly porous boron nitride nanosheets doped with sulfur is demonstrated, and CO₂ adsorption-desorption studies were carried out using a pressure reduction technique. The studies show that the nanosheets exhibit excellent cyclic stability in storage performance. Thermodynamic studies suggest that the adsorption takes place mainly through physisorption. Further, surface modification of the highly porous nanosheets by incorporating ionic liquids further enhanced the CO₂ capture capability of the nanocomposite, revealing that this particular material has the potential to be an excellent adsorbent of carbon dioxide gas.

Keywords: CO₂ capture, hexagonal boron nitride nanosheets, porous network, sulfur doping

Procedia PDF Downloads 217
840 Bayesian System and Copula for Event Detection and Summarization of Soccer Videos

Authors: Dhanuja S. Patil, Sanjay B. Waykar

Abstract:

Event detection is one of the most important components for many kinds of application domains of video data systems. Recently, it has gained considerable interest from practitioners and academics in different areas. While video event detection has been the subject of extensive study efforts in recent years, considerably less existing methodology has considered multi-modal data and efficiency-related issues. During soccer matches, various doubtful situations arise that cannot be easily judged by the referees. A framework that objectively checks image sequences would prevent incorrect interpretations caused by errors or by the high speed of events. Bayesian networks provide a structure for dealing with this uncertainty, using a simple graphical structure together with probability calculus. We propose an efficient framework for the analysis and summarization of soccer videos utilizing object-based features. The proposed work utilizes the t-cherry junction tree, a very recent advancement in probabilistic graphical models, to create a compact representation and a good approximation of an otherwise intractable model. This approach has several advantages: first, the t-cherry structure gives the best approximation within the class of junction trees; second, the construction of a t-cherry junction tree can be largely parallelized; and finally, inference can be performed using distributed computation. Experimental results demonstrate the effectiveness, efficiency, and robustness of the proposed work, evaluated on a comprehensive data set comprising multiple soccer videos captured at different venues.
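
As a minimal illustration of the Bayesian reasoning involved, here is a toy two-observation network solved by exact enumeration in plain Python (the variables, probability values, and structure are illustrative assumptions, not the paper's t-cherry junction tree model over object-based video features):

```python
# Toy Bayesian network for soccer event detection, solved by exact
# enumeration. Illustrative only -- the paper uses a t-cherry junction
# tree over object-based video features, not this two-feature toy model.

# P(goal): prior that the current shot contains a goal event
p_goal = {True: 0.05, False: 0.95}
# P(cheer | goal): crowd-audio feature conditioned on the event
p_cheer = {True: {True: 0.90, False: 0.10},
           False: {True: 0.20, False: 0.80}}
# P(replay | goal): replay-logo feature conditioned on the event
p_replay = {True: {True: 0.85, False: 0.15},
            False: {True: 0.10, False: 0.90}}

def posterior_goal(cheer, replay):
    """Return P(goal | cheer, replay) by enumerating the joint."""
    joint = {g: p_goal[g] * p_cheer[g][cheer] * p_replay[g][replay]
             for g in (True, False)}
    z = sum(joint.values())  # normalizing constant P(cheer, replay)
    return joint[True] / z

print(posterior_goal(cheer=True, replay=True))    # ~0.67
print(posterior_goal(cheer=False, replay=False))  # ~0.001
```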

Keywords: summarization, detection, Bayesian network, t-cherry tree

Procedia PDF Downloads 293
839 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The educational system faces a significant concern with regard to Dyslexia and Dysgraphia, which are learning disabilities impacting reading and writing abilities. This is particularly challenging for children who speak the Sinhala language due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnoses and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, Resnet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The Resnet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.
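
A minimal sketch of the hyperparameter search step described above (scikit-learn assumed; the parameter grid, feature dimensions, and synthetic data are illustrative placeholders, not the study's actual values):

```python
# Sketch of MLP hyperparameter tuning with grid search cross-validation.
# The grid values and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))    # e.g., CNN outputs + other input data
y = rng.integers(0, 2, size=500)  # 1 = at risk, 0 = not at risk

param_grid = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "alpha": [1e-4, 1e-3, 1e-2],       # L2 regularization strength
    "learning_rate_init": [1e-3, 1e-2],
}
search = GridSearchCV(
    MLPClassifier(max_iter=1000, random_state=0),
    param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```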

Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 31
838 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals

Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou

Abstract:

In data-driven prognostic methods, the prediction accuracy of remaining useful life estimation for bearings mainly depends on the performance of health indicators, which are usually fused from statistical features extracted from vibration signals. However, existing health indicators have the following two drawbacks: (1) the different ranges of the statistical features make different contributions to the construction of the health indicators, so expert knowledge is required to extract the features; (2) when convolutional neural networks are utilized to tackle the time-frequency features of signals, the time-series nature of the signals is not considered. To overcome these drawbacks, in this study, a method combining a convolutional neural network with a gated recurrent unit is proposed to extract time-frequency image features. The extracted features are utilized to construct a health indicator and predict the remaining useful life of bearings. First, original signals are converted into time-frequency images by using the continuous wavelet transform so as to form the original feature sets. Second, with the convolutional and pooling layers of convolutional neural networks, the most sensitive features of the time-frequency images are selected from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method offers enhanced performance compared with related studies that have used the same bearing dataset provided by PRONOSTIA.
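
A minimal sketch of the described pipeline (PyTorch assumed; all dimensions are illustrative, not those used in the study): a CNN encodes each time-frequency image in a sequence, and a GRU consumes the resulting feature sequence to produce the health indicator.

```python
# Sketch of the CNN -> GRU health-indicator pipeline described above.
# PyTorch assumed; dimensions are illustrative, not the study's values.
import torch
import torch.nn as nn

class CnnGruHealthIndicator(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # CNN encoder applied to each wavelet time-frequency image
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())  # -> 32*4*4 = 512
        self.gru = nn.GRU(input_size=512, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, 1)  # scalar health indicator

    def forward(self, x):
        # x: (batch, seq_len, 1, H, W) sequence of CWT images
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)
        return torch.sigmoid(self.head(out[:, -1]))  # HI in (0, 1)

model = CnnGruHealthIndicator()
hi = model(torch.randn(2, 10, 1, 64, 64))  # 2 bearings, 10 time steps
print(hi.shape)  # torch.Size([2, 1])
```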

Keywords: continuous wavelet transform, convolutional neural network, gated recurrent unit, health indicators, remaining useful life

Procedia PDF Downloads 98
837 A Mega-Analysis of the Predictive Power of Initial Contact within Minimal Social Network

Authors: Cathal Ffrench, Ryan Barrett, Mike Quayle

Abstract:

It is accepted in social psychology that categorization leads to ingroup favoritism, without further thought given to the processes that may co-occur with or even precede categorization. These categorizations move away from the conceptualization of the self as a unique social being toward an increasingly collective identity. Subsequently, many individuals derive much of their self-evaluation from these collective identities. The seminal literature on this topic argues that it is primarily categorization that evokes instances of ingroup favoritism. In relation to these theories, we argue that categorization acts to enhance and further intergroup processes rather than define them. More precisely, we propose that categorization aids initial ingroup contact, and this first contact is predictive of subsequent favoritism at individual and collective levels. This analysis focuses on studies based on the Virtual Interaction APPLication (VIAPPL), a software interface that builds on the original minimal group studies while addressing their flaws. The VIAPPL allows the exchange of tokens in an intra- and inter-group manner. This token exchange is how we classified first contact. The study involves binary longitudinal analysis to better understand the subsequent exchanges of individuals based on who they first interacted with. Studies were selected on the criteria of evidence of explicit first interactions and two-group designs. Our findings paint a compelling picture in support of a motivated contact hypothesis, which suggests that an individual's first motivated contact toward another has strong predictive capability for future behavior. This contact can lead to habit formation and specific favoritism towards individuals with whom contact has been established. This has important implications for understanding how group conflict occurs and how intra-group individual bias can develop.

Keywords: categorization, group dynamics, initial contact, minimal social networks, momentary contact

Procedia PDF Downloads 121
836 LGR5 and Downstream Intracellular Signaling Proteins Play Critical Roles in the Cell Proliferation of Neuroblastoma, Meningioma and Pituitary Adenoma

Authors: Jin Hwan Cheong, Mina Hwang, Myung Hoon Han, Je Il Ryu, Young ha Oh, Seong Ho Koh, Wu Duck Won, Byung Jin Ha

Abstract:

Leucine-rich repeat-containing G-protein coupled receptor 5 (LGR5) has been reported to play critical roles in the proliferation of various cancer cells. However, the roles of LGR5 in brain tumors and the specific intracellular signaling proteins directly associated with it remain unknown. Expression of LGR5 was first measured in normal brain tissue, meningioma, and pituitary adenoma of humans. To identify the downstream signaling pathways of LGR5, siRNA-mediated knockdown of LGR5 was performed in SH-SY5Y neuroblastoma cells followed by proteomics analysis with 2-dimensional polyacrylamide gel electrophoresis (2D-PAGE). In addition, the expression of LGR5-associated proteins was evaluated in LGR5-inhibited neuroblastoma cells and in human normal brain, meningioma, and pituitary adenoma tissue. Proteomics analysis showed 12 protein spots were significantly different in expression level (more than two-fold change) and subsequently identified by peptide mass fingerprinting. A protein association network was constructed from the 12 identified proteins altered by LGR5 knockdown. Direct and indirect interactions were identified among the 12 proteins. HSP 90-beta was one of the proteins whose expression was altered by LGR5 knockdown. Likewise, we observed decreased expression of proteins in the hnRNP subfamily following LGR5 knockdown. In addition, we have for the first time identified significantly higher hnRNP family expression in meningioma and pituitary adenoma compared to normal brain tissue. Taken together, LGR5 and its downstream signaling play critical roles in neuroblastoma and brain tumors such as meningioma and pituitary adenoma.

Keywords: LGR5, neuroblastoma, meningioma, pituitary adenoma, hnRNP

Procedia PDF Downloads 19
835 TessPy – Spatial Tessellation Made Easy

Authors: Jonas Hamann, Siavash Saki, Tobias Hagen

Abstract:

Discretization of urban areas is a crucial aspect of many spatial analyses. The process of discretizing space into subspaces without overlaps and gaps is called tessellation. It helps in understanding urban space and provides a framework for analyzing geospatial data. Tessellation methods can be divided into two groups: regular tessellations and irregular tessellations. While regular tessellation methods, like square grids or hexagon grids, are suitable for addressing pure geometry problems, they cannot take the unique characteristics of different subareas into account. Irregular tessellation methods, however, allow the borders between subareas to be defined more realistically based on urban features like the road network or Points of Interest (POI). Even though Python is one of the most used programming languages when it comes to spatial analysis, there is currently no library that combines different tessellation methods to enable users and researchers to compare different techniques. To close this gap, we propose TessPy, an open-source Python package, which combines all the above-mentioned tessellation methods and makes them easily accessible to everyone. The core functions of TessPy represent five different tessellation methods: squares, hexagons, adaptive squares, Voronoi polygons, and city blocks. When using regular methods, users can set the resolution of the tessellation, which defines the fineness of the discretization and the desired number of tiles. Irregular tessellation methods allow users to define which spatial data to consider (e.g., amenity, building, office) and how fine the tessellation should be. The spatial data used is open-source and provided by OpenStreetMap. This data can be easily extracted and used for further analyses. Besides the methodology of the different techniques, the state of the art, including examples and future work, will be discussed. All dependencies can be installed using conda or pip; the former is recommended.
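
A brief usage sketch along these lines (hypothetical: the class and method signatures below merely follow the five methods described in the abstract and may differ from TessPy's published API):

```python
# Hypothetical usage sketch of TessPy based on the description above;
# class/method names and parameters may differ from the released API.
from tesspy import Tessellation

city = Tessellation("Frankfurt am Main")  # boundary via OpenStreetMap

squares = city.squares(resolution=15)      # regular: square grid
hexagons = city.hexagons(resolution=8)     # regular: hexagon grid
adaptive = city.adaptive_squares(resolution=15)
voronoi = city.voronoi(poi_categories=["amenity"])  # irregular: POI seeds
blocks = city.city_blocks()                # irregular: road network

print(len(hexagons), "hexagon tiles")      # GeoDataFrame of polygons
```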

Keywords: geospatial data science, geospatial data analysis, tessellations, urban studies

Procedia PDF Downloads 93
834 Human-Centred Data Analysis Method for Future Design of Residential Spaces: Coliving Case Study

Authors: Alicia Regodon Puyalto, Alfonso Garcia-Santos

Abstract:

This article presents a method to analyze the use of indoor spaces based on data analytics obtained from in-built digital devices. The study uses the data generated by in-place devices, such as smart locks, Wi-Fi routers, and electrical sensors, to gain additional insights into space occupancy, user behaviour, and comfort. Those devices, originally installed to facilitate remote operations, report data through the internet, which the research uses to analyze information on real-time human use of spaces. Using an in-place Internet of Things (IoT) network enables a faster, more affordable, seamless, and scalable solution to analyze building interior spaces without incorporating external data collection systems such as additional sensors. The methodology is applied to a real case study of coliving: a residential building of 3,000 m², 7 floors, and 80 users in the centre of Madrid. The case study applies the method to classify IoT devices and to assess, clean, and analyze the collected data based on the analysis framework. The information is collected remotely through the different devices' platforms. The first step is to curate the data and understand what insights each device can provide according to the objectives of the study; this generates an analysis framework that can be scaled for future building assessment, even beyond the residential sector. The method adjusts the parameters to be analyzed to the dataset available in the IoT network of each building. The research demonstrates how human-centred data analytics can improve the future spatial design of indoor spaces.

Keywords: in-place devices, IoT, human-centred data-analytics, spatial design

Procedia PDF Downloads 169
833 Soccer, a Major Social Changing Factor: Kosovo Case

Authors: Armend Kelmendi, Adnan Ahmeti

Abstract:

The purpose of our study was to assess the impact of soccer on the overall welfare (education, health, and economic prosperity) of youth in Kosovo (age: 7-18). The research measured a number of parameters (training methodologies, conditions, community leadership impact) in a sample consisting of 6 different football club academies across the country. Fifty (50) male and female football youngsters volunteered for this study. To generate more reliable results, the analysis was conducted with the help of a set of effective project management tools and techniques (Gantt chart, Logic Network, PERT chart, Work Breakdown Structure, and Budgeting Analysis). The interviewees were interviewed under a specific lens of categories (impact on education, health, and economic prosperity). A set of questions was asked, e.g.: What has football provided to you and the community you live in? Did football increase your confidence and shape your life for the better? What was the main reason you started training in football? The results explain how a single sport, namely football in Kosovo, can make a huge social change, improving key social factors in a society. There was a considerable difference between the youth clubs as far as training conditions are concerned. The study found that despite financial constraints, two out of six clubs managed to produce twice as many talented players who were introduced to professional primary league teams in Kosovo and Albania, as well as other soccer teams in the region, Europe, and Asia. The study indicates that better sports policy must be formulated and associated with substantial financial investments in soccer for it to be considered fruitful and beneficial for players of 18-plus years of age, namely professionals.

Keywords: youth, prosperity, conditions, investments, growth, free movement

Procedia PDF Downloads 211
832 Adjusting Electricity Demand Data to Account for the Impact of Loadshedding in Forecasting Models

Authors: Migael van Zyl, Stefanie Visser, Awelani Phaswana

Abstract:

The electricity landscape in South Africa is characterized by frequent occurrences of loadshedding, a measure implemented by Eskom to manage electricity generation shortages by curtailing demand. Loadshedding, classified into stages ranging from 1 to 8 based on severity, involves the systematic rotation of power cuts across municipalities according to predefined schedules. However, this practice introduces distortions in recorded electricity demand, posing challenges to the accurate forecasting essential for budgeting, network planning, and generation scheduling. Addressing this challenge requires the development of a methodology to quantify the impact of loadshedding and integrate it back into metered electricity demand data. Fortunately, comprehensive records of loadshedding impacts are maintained in a database, enabling the alignment of loadshedding effects with hourly demand data. This adjustment ensures that forecasts accurately reflect true demand patterns, independent of loadshedding's influence, thereby enhancing the reliability of electricity supply management in South Africa. This paper presents a methodology for determining the hourly impact of loadshedding and subsequently adjusting historical demand data to account for it. Furthermore, two forecasting models are developed: one utilizing the original dataset and the other using the adjusted data. A comparative analysis is conducted to evaluate the forecast accuracy improvements resulting from the adjustment process. By implementing this methodology, stakeholders can make more informed decisions regarding electricity infrastructure investments, resource allocation, and operational planning, contributing to the overall stability and efficiency of South Africa's electricity supply system.
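
A simplified sketch of the adjustment step (pandas assumed; the column names, sample values, and the additive-restoration rule are illustrative assumptions, not the paper's exact methodology):

```python
# Sketch: restore estimated curtailed load to metered demand so a
# forecasting model trains on "true" demand. Column names are assumed.
import pandas as pd

demand = pd.DataFrame({
    "timestamp": pd.date_range("2023-07-01", periods=6, freq="h"),
    "metered_mw": [2500, 2400, 1900, 1850, 2450, 2550],
})
# Hourly loadshedding records: estimated MW curtailed per hour
shedding = pd.DataFrame({
    "timestamp": pd.date_range("2023-07-01 02:00", periods=2, freq="h"),
    "curtailed_mw": [600, 650],
})

adjusted = demand.merge(shedding, on="timestamp", how="left")
adjusted["curtailed_mw"] = adjusted["curtailed_mw"].fillna(0)
# True demand = what was metered plus what loadshedding removed
adjusted["true_demand_mw"] = adjusted["metered_mw"] + adjusted["curtailed_mw"]
print(adjusted[["timestamp", "metered_mw", "true_demand_mw"]])
```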

Keywords: electricity demand forecasting, load shedding, demand side management, data science

Procedia PDF Downloads 19
831 Evolution of the Skid-Resistance of Road Surfaces Based on Snow Removal Operations

Authors: Garance Liaboeuf, Alain Le Bot, Ali Daouadji, Antoine Martinet, Mohamed Bouteldja, Nicolas Grignard, Damien Pilet

Abstract:

Like every French road operator, Autoroutes et Tunnel du Mont Blanc (ATMB) conducts annual inspections of its infrastructure. Significant loss of skid resistance has been observed on some sections of its network, sometimes on recent pavements. To make the mechanisms and factors that can lead to this loss of grip better understood, ATMB has launched a study aimed at identifying the causes and developing solutions to prevent this phenomenon. A field campaign was conducted to quantify the deterioration of different road surfaces subjected to controlled scraping with the steel blades of snow removal machines. These operations are carried out during the winter period according to a strict protocol. In order to correct the skid-resistance values, a control section is set up. In this way, only the effect of the scraping is taken into account; it allows us to exclude the influence of the environment (temperature, humidity, etc.) and of the surface state on the skid resistance values measured during the different sessions. Skid measurements after eight years of scraping cycles showed a 7% to 13% decrease in microtexture adhesion and a 2% to 12% decrease in macrotexture adhesion. These reductions are attributed to the polishing of road surfaces by the steel blades. Also, regeneration phenomena occur after a certain number of blade passes. Differences in resistance to the steel scraper blades appear to be related to the intrinsic properties of the aggregate used in the pavement formulation and its type. Finally, the results of this campaign will allow the determination of the pavement's resistance to damage by the steel blades of snow plows. A law describing the evolution of the surface condition as a function of scraping operations will also be developed from this study.

Keywords: pavement, skid, surface, snow plow

Procedia PDF Downloads 15
830 Ground Short Circuit Contributions of a MV Distribution Line Equipped with PWMSC

Authors: Mohamed Zellagui, Heba Ahmed Hassan

Abstract:

This paper proposes a new approach for the calculation of short-circuit parameters in the presence of a Pulse Width Modulated based Series Compensator (PWMSC). The PWMSC is a new Flexible Alternating Current Transmission System (FACTS) device that can modulate the impedance of a transmission line by varying the duty cycle (D) of a train of pulses with fixed frequency. This results in an improvement of system performance, as the device provides virtual compensation of the distribution line impedance by injecting a controllable apparent reactance in series with the line. This controllable reactance can operate in both capacitive and inductive modes, which makes the PWMSC highly effective in controlling power flow and increasing system stability. The purpose of this work is to study the impact of the fault resistance (RF), varied from 0 to 30 Ω, on fault current calculations for a ground fault at a fixed fault location. The case study is a medium voltage (MV) Algerian distribution line compensated by a PWMSC in the 30 kV Algerian distribution power network. The analysis is based on the symmetrical components method, which involves the calculation of the symmetrical components of currents and voltages, without and with the PWMSC, in both cases of maximum and minimum duty cycle value for the capacitive and inductive modes. The paper presents simulation results which are verified by theoretical analysis.
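
For reference, in the symmetrical components method the single line-to-ground fault current with a fault resistance RF in the fault path is If = 3E / (Z1 + Z2 + Z0 + 3RF). A short numerical sketch of the RF sweep (the sequence impedance values are illustrative placeholders, not the study's network data; the PWMSC would appear as an additional controllable series reactance in the sequence impedances):

```python
# Sketch of a single line-to-ground fault calculation by symmetrical
# components: If = 3*E / (Z1 + Z2 + Z0 + 3*RF). Impedances are
# illustrative values, not the 30 kV network data of the paper.
import numpy as np

E = 30e3 / np.sqrt(3)          # phase voltage of a 30 kV system (V)
Z1 = complex(1.2, 4.8)         # positive-sequence impedance (ohm)
Z2 = complex(1.2, 4.8)         # negative-sequence impedance (ohm)
Z0 = complex(3.5, 12.0)        # zero-sequence impedance (ohm)

for rf in (0.0, 10.0, 30.0):   # fault resistance sweep (ohm)
    i_f = 3 * E / (Z1 + Z2 + Z0 + 3 * rf)
    print(f"RF = {rf:5.1f} ohm -> |If| = {abs(i_f):8.1f} A")
```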

Keywords: pulse width modulated series compensator (pwmsc), duty cycle, distribution line, short-circuit calculations, ground fault, symmetrical components method

Procedia PDF Downloads 471
829 Linking Adaptation to Climate Change and Sustainable Development: The Case of ClimAdaPT.Local in Portugal

Authors: A. F. Alves, L. Schmidt, J. Ferrao

Abstract:

Portugal is one of the more vulnerable European countries to the impacts of climate change. These include: temperature increase; coastal sea level rise; desertification and drought in the countryside; and frequent and intense extreme weather events. Hence, adaptation strategies to climate change are of great importance. This is what was addressed by ClimAdaPT.Local. This policy-oriented project had the main goal of developing 26 Municipal Adaptation Strategies for Climate Change, through the identification of local specific present and future vulnerabilities, the training of municipal officials, and the engagement of local communities. It is intended to be replicated throughout the whole territory and to stimulate the creation of a national network of local adaptation in Portugal. Supported by methodologies and tools specifically developed for this project, our paper is based on the surveys, training and stakeholder engagement workshops implemented at municipal level. In an 'adaptation-as-learning' process, these tools functioned as a social-learning platform and an exercise in knowledge and policy co-production. The results allowed us to explore the nature of local vulnerabilities and the exposure of gaps in the context of reappraisal of both future climate change adaptation opportunities and possible dysfunctionalities in the governance arrangements of municipal Portugal. Development issues are highlighted when we address the sectors and social groups that are both more sensitive and more vulnerable to the impacts of climate change. We argue that a pluralistic dialogue and a common framing can be established between them, with great potential for transformational adaptation. Observed climate change, present-day climate variability and future expectations of change are great societal challenges which should be understood in the context of the sustainable development agenda.

Keywords: adaptation, ClimAdaPT.Local, climate change, Portugal, sustainable development

Procedia PDF Downloads 162
828 Comparative Analysis of Data Gathering Protocols with Multiple Mobile Elements for Wireless Sensor Network

Authors: Bhat Geetalaxmi Jairam, D. V. Ashoka

Abstract:

Wireless Sensor Networks are used in many applications to collect sensed data from different sources. Sensed data has to be delivered towards the sink through the sensors' wireless interface using multi-hop communication. Data collection in wireless sensor networks consumes energy, and energy consumption is the major constraint in WSNs. Reducing the energy consumption while increasing the amount of generated data is a great challenge. In this paper, we have implemented two data gathering protocols with multiple mobile sinks/elements to collect data from sensor nodes. The first is Energy-Efficient Data Gathering with Tour Length-Constrained Mobile Elements in Wireless Sensor Networks (EEDG), in which the mobile sinks use a vehicle routing protocol to collect data. The second is An Intelligent Agent-based Routing Structure for Mobile Sinks in WSNs (IAR), in which the mobile sinks use Prim's algorithm to collect data. We have implemented the concepts common to both protocols, such as deployment of mobile sinks, generation of the visiting schedule, and collection of data from the cluster members. We have compared the performance of both protocols using statistics based on performance parameters such as delay, packet drop, packet delivery ratio, available energy, and control overhead. We conclude that EEDG is more efficient than the IAR protocol, but with a few limitations, which include unaddressed issues like redundancy removal, idle listening, and the mobile sink's pause/wait state at the node. In future work, we plan to concentrate on these limitations to arrive at a new energy-efficient protocol which will help in improving the lifetime of the WSN.
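
As background for the routing structure used by IAR, a compact Prim's algorithm sketch in plain Python (the weighted graph is an illustrative stand-in for inter-node link costs, not the paper's simulation topology):

```python
# Prim's algorithm for a minimum spanning tree, of the kind IAR uses to
# build the mobile sink's routing structure. Graph weights are illustrative.
import heapq

def prim_mst(graph, start):
    """graph: {node: [(weight, neighbor), ...]} -> list of MST edges."""
    visited = {start}
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    mst = []
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)          # cheapest edge crossing the cut joins
        mst.append((u, v, w))
        for w2, nxt in graph[v]:
            if nxt not in visited:
                heapq.heappush(heap, (w2, v, nxt))
    return mst

sensors = {  # pairwise link costs (e.g., distances between nodes)
    "A": [(4, "B"), (8, "C")],
    "B": [(4, "A"), (2, "C"), (5, "D")],
    "C": [(8, "A"), (2, "B"), (9, "D")],
    "D": [(5, "B"), (9, "C")],
}
print(prim_mst(sensors, "A"))  # [('A','B',4), ('B','C',2), ('B','D',5)]
```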

Keywords: aggregation, consumption, data gathering, efficiency

Procedia PDF Downloads 461
827 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, resulting in limited coverage and time-consuming processes. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, Resnet50, VGG16 and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values for the model to be identified. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention for these conditions. The Resnet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.

Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 53
826 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) is considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (AF Termination Challenge Database, MIT-BIH AF, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Databases) were used for assessment. All time series were segmented into 1-min RR interval windows, and four specific features were then calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find the important features for discriminating between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection method holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
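
A compact sketch of the two-stage pipeline (scikit-learn PCA plus a minimal LVQ1 update written in NumPy; the synthetic values stand in for the four RR-interval features and are not the study's data):

```python
# Sketch: PCA feature reduction followed by a minimal LVQ1 classifier.
# Synthetic data stands in for the four RR-interval features.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 4)),    # normal sinus rhythm
               rng.normal(2, 1, (100, 4))])   # atrial fibrillation
y = np.array([0] * 100 + [1] * 100)

Xp = PCA(n_components=2).fit_transform(X)     # feature reduction

# LVQ1: prototypes pulled toward same-class samples, pushed away
# from other-class samples.
protos = np.array([Xp[y == c].mean(axis=0) for c in (0, 1)])
labels = np.array([0, 1])
lr = 0.05
for epoch in range(20):
    for xi, yi in zip(Xp, y):
        k = np.argmin(np.linalg.norm(protos - xi, axis=1))  # winner
        sign = 1.0 if labels[k] == yi else -1.0
        protos[k] += sign * lr * (xi - protos[k])

pred = labels[np.argmin(np.linalg.norm(
    Xp[:, None, :] - protos[None, :, :], axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```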

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 237
825 Cognitive Science Based Scheduling in Grid Environment

Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya

Abstract:

The grid is an infrastructure that allows the deployment of large distributed data sets from multiple locations to reach a common goal. Scheduling data-intensive applications becomes challenging as the data sets involved are very large. Only two solutions exist to tackle this challenging issue. First, the computation that requires huge data sets can be transferred to the data site. Second, the required data sets can be transferred to the computation site. In the former scenario, the computation cannot be transferred since the servers are storage/data servers with little or no computational capability. Hence, the second scenario can be considered for further exploration. During scheduling, transferring huge data sets from one site to another requires more network bandwidth. In order to mitigate this issue, this work focuses on incorporating cognitive science in scheduling. Cognitive science is the study of the human brain and its related activities. Current research is mainly focused on incorporating cognitive science into various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. Here, a cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents present in the CE help in analyzing the request and creating the knowledge base. Depending upon the link capacity, a decision is taken on whether to transfer the data sets or to partition them. Prediction of the next request is made by the agents to serve the requesting site with data sets in advance. This reduces the data availability time and data transfer time. A replica catalog and a metadata catalog created by the agents assist in the decision-making process.

Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence

Procedia PDF Downloads 364
824 Security Report Profiling for Mobile Banking Applications in Indonesia Based on OWASP Mobile Top 10-2016

Authors: Bambang Novianto, Rizal Aditya Herdianto, Raphael Bianco Huwae, Afifah, Alfonso Brolin Sihite, Rudi Lumanto

Abstract:

The mobile banking application is a type of mobile application that is growing rapidly. This is caused by the ease of service and the time savings in making transactions. On the other hand, this certainly presents a challenge in terms of security. The use of mobile banking cannot be separated from the cyberattacks that may occur, which can result in the theft of sensitive information or financial loss. Financial loss and the theft of sensitive information are the outcomes most to be avoided because, besides harming the user, they can also cause a loss of customer trust in a bank. Cyberattacks that are often carried out against mobile applications include phishing, hacking, theft, misuse of data, etc. A cyberattack can occur when a vulnerability is successfully exploited. The OWASP Mobile Top 10 has recorded the 10 vulnerabilities that are most commonly found in mobile applications. In addition, Android permissions also have the potential to cause vulnerabilities. Therefore, an overview of the profile of mobile banking applications becomes an urgent need, so that it can inform the parties involved in improving security. In this study, an experiment was conducted to capture the profile of mobile banking applications in Indonesia based on Android permissions and the OWASP Mobile Top 10 2016. The results show that there are six basic vulnerabilities based on the OWASP Mobile Top 10 that are most commonly found in mobile banking applications in Indonesia, i.e. M1: Improper Platform Usage, M2: Insecure Data Storage, M3: Insecure Communication, M5: Insufficient Cryptography, M7: Client Code Quality, and M9: Reverse Engineering. The most commonly requested Android permissions are internet access, network state access, and read phone state.
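
A small sketch of the permission-profiling step (standard-library XML parsing only; it assumes a decoded AndroidManifest.xml, e.g. as produced by a decompiler such as apktool, and the watchlist simply mirrors the permissions highlighted above):

```python
# Sketch: extract requested permissions from a decoded AndroidManifest.xml
# (e.g., produced by apktool). Flags the permissions highlighted above.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"
WATCHLIST = {
    "android.permission.INTERNET",
    "android.permission.ACCESS_NETWORK_STATE",
    "android.permission.READ_PHONE_STATE",
}

def requested_permissions(manifest_path):
    """Return the set of android:name values on <uses-permission> tags."""
    root = ET.parse(manifest_path).getroot()
    return {elem.get(ANDROID_NS + "name")
            for elem in root.iter("uses-permission")}

perms = requested_permissions("AndroidManifest.xml")
for p in sorted(perms):
    flag = "  <-- common in banking apps" if p in WATCHLIST else ""
    print(p + flag)
```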

Keywords: mobile banking application, OWASP mobile top 10 2016, android permission, sensitive information, financial loss

Procedia PDF Downloads 111
823 Urine Neutrophil Gelatinase-Associated Lipocalin as an Early Marker of Acute Kidney Injury in Hematopoietic Stem Cell Transplantation Patients

Authors: Sara Ataei, Maryam Taghizadeh-Ghehi, Amir Sarayani, Asieh Ashouri, Amirhossein Moslehi, Molouk Hadjibabaie, Kheirollah Gholami

Abstract:

Background: Acute kidney injury (AKI) is common in hematopoietic stem cell transplantation (HSCT) patients, with an incidence of 21-73%. Prevention and early diagnosis reduce the frequency and severity of this complication, so predictive biomarkers are of major importance for timely diagnosis. Neutrophil gelatinase-associated lipocalin (NGAL) is a widely investigated novel biomarker for early diagnosis of AKI. However, no study has assessed NGAL for AKI diagnosis in HSCT patients. Methods: We performed further analyses on data gathered from our recent trial to evaluate the performance of urine NGAL (uNGAL) as an indicator of AKI in 72 allogeneic HSCT patients. AKI diagnosis and severity were assessed using the Risk-Injury-Failure-Loss-End-stage renal disease and AKI Network criteria. We assessed uNGAL on days -6, -3, +3, +9 and +15. Results: Time-dependent Cox regression analysis revealed a statistically significant relationship between uNGAL and AKI occurrence (HR=1.04 (1.008-1.07), P=0.01). There was a relation between the day +9 to baseline uNGAL ratio and the incidence of AKI (unadjusted HR=1.047 (1.012-1.083), P<0.01). The area under the receiver-operating characteristic curve for the day +9 to baseline ratio was 0.86 (0.74-0.99, P<0.01), and a cut-off value of 2.62 was 85% sensitive and 83% specific in predicting AKI. Conclusions: Our results indicated that an increase in uNGAL augmented the risk of AKI and that the change of day +9 uNGAL concentration from baseline could be of value for predicting AKI in HSCT patients. Additionally, uNGAL changes preceded serum creatinine rises by nearly 2 days.

Keywords: acute kidney injury, hematopoietic stem cell transplantation, neutrophil gelatinase-associated lipocalin, receiver-operating characteristic curve

Procedia PDF Downloads 374
822 Investigate the Rural Mobility and Accessibility Challenges of Seniors

Authors: Tom Ryan

Abstract:

This paper investigates the rural mobility and accessibility challenges of a specific target group: Seniors. The target group is those over 66 years of age who are entitled to use the Public Transport (PT) Free Travel Scheme in rural Ireland. The paper explores at a high level some of the projected rural PT challenges and requirements over the next 10-15 years, noting that statistical predictions show a significant population demographic shift within the Seniors' age profile. Using the PESTEL framework, the literature review explored existing research concerning the mobility and accessibility challenges, and the opportunities, that Seniors face. Twenty-seven qualitative in-depth interviews with stakeholders within the ecosystem were undertaken. The stakeholders included rural PT customers, Local-Link managers, NTA senior management, a Minister of State, and a European Parliament policymaker. Tier 1 interviewee feedback highlights that no PT network exists for rural patients to access hospital facilities. There was no evidence from the Tier 2 research findings to show that health policymakers and transport planners are working to deliver a national solution to support patients in getting to hospital appointments. Several research interviewees discussed the theme of isolation and the perceived stigma of senior males utilising PT. The findings indicated that Mobility as a Service (MaaS) is potentially revolutionary in the PT arena. Finally, this paper suggests several short-, medium- and long-term recommendations based on the research findings. These recommendations are a potential springboard to ensure that rural PT is suitable for future Irish generations.

Keywords: accessibility, active ageing, car dependence, isolation, seniors health issues, behavioural changes, environmental challenges, internet of things, demand responsive, mobility as a service

Procedia PDF Downloads 78
821 Study of Structural Behavior and Proton Conductivity of Inorganic Gel Paste Electrolyte at Various Phosphorous to Silicon Ratio by Multiscale Modelling

Authors: P. Haldar, P. Ghosh, S. Ghoshdastidar, K. Kargupta

Abstract:

In polymer electrolyte membrane fuel cells (PEMFC), the membrane electrode assembly (MEA) consists of two platinum-coated carbon electrodes sandwiching one proton-conducting phosphoric acid-doped polymeric membrane. Due to low mechanical stability, flooding, and fuel crossover, the application of phosphoric acid in a polymeric membrane is very critical. Phosphorus- and silica-based 3D inorganic gels have gained attention in the fields of supercapacitors, fuel cells, and metal hydride batteries due to their thermally stable, highly proton-conductive behavior. Also, as large amounts of water molecules and phosphoric acid can easily become trapped in the cavities of the Si-O-Si network, leaching out is prevented. In this study, we have performed molecular dynamics (MD) simulations and first-principle calculations to understand the structural, electronic, electrochemical, and morphological behavior of this inorganic gel at various P to Si ratios. We have used dipole-dipole interactions, H bonding, and van der Waals forces to study the main interactions between the molecules. A 'structure-property-performance' mapping is initiated to determine the optimum P to Si ratio for the best proton conductivity. We have performed the MD simulations at various temperatures to understand the temperature dependency of proton conductivity. Based on the observed results, we propose a model which fits well with experimental data and other literature values. We have also studied the mechanism behind proton conductivity. Finally, we propose a structure for the gel paste with the optimum P to Si ratio.

Keywords: first principle calculation, molecular dynamics simulation, phosphorous and silica based 3D inorganic gel, polymer electrolyte membrane fuel cells, proton conductivity

Procedia PDF Downloads 96
820 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study

Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin

Abstract:

Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of the live video stream. For this purpose, there are several objective algorithms available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we have conducted subjective evaluation tests on a set of video sequences containing a temporal impairment known as frame freezing. Frame freezing is a transmission error as well as a hardware error that can result in the loss of video frames on the receiving side of a transmission system. Our subjective tests covered videos that contain a single freezing event as well as videos that contain multiple freezing events. We have recorded our subjective test results for all the videos in order to compare the available No Reference (NR) objective algorithms. Finally, we have shown the performance of the no-reference algorithms used for objective evaluation of videos and suggest the algorithm that works best. The outcome of this study shows the importance of QoE and its effect on human perception. The results of the subjective evaluation can serve the purpose of validating objective algorithms.

Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)

Procedia PDF Downloads 574
819 Reinforcement-Learning Based Handover Optimization for Cellular Unmanned Aerial Vehicles Connectivity

Authors: Mahmoud Almasri, Xavier Marjou, Fanny Parzysz

Abstract:

The demand for services provided by Unmanned Aerial Vehicles (UAVs) is increasing pervasively across several sectors, including public safety, economic, and delivery services. As the number of applications using UAVs grows rapidly, more and more powerful, quality-of-service-aware, and power-efficient computing units are necessary. Recently, cellular technology has drawn more attention, as it can ensure reliable and flexible communications services for UAVs. In cellular networks, flying at high speed and altitude is subject to several key challenges, such as frequent handovers (HOs), high interference levels, connectivity coverage holes, etc. Unnecessary HOs may lead to 'ping-pong' between the UAVs and the serving cells, resulting in a decrease in the quality of service and increased energy consumption. In order to optimize the number of HOs, we develop in this paper a Q-learning-based algorithm. While existing works focus on adjusting the number of HOs in a static network topology, we take into account the impact of cell deployment for three different simulation scenarios (rural, semi-rural, and urban areas). We also consider the impact of the decision distance, i.e., the distance at which the drone makes a switching decision affecting the number of HOs. Our results show that a Q-learning-based algorithm allows the average number of HOs to be significantly reduced compared to a baseline case where the drone always selects the cell with the highest received signal. Moreover, we also identify which hyper-parameters have the largest impact on the number of HOs in the three tested environments, i.e., rural, semi-rural, and urban.
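
An illustrative skeleton of the tabular Q-learning update for the handover decision (the state encoding, action set, and reward shaping below are simplified assumptions for illustration, not the paper's exact formulation):

```python
# Skeleton of tabular Q-learning for handover decisions. The state and
# reward definitions are simplified assumptions for illustration.
import numpy as np

n_states = 50      # e.g., discretized (serving cell, position) pairs
n_actions = 3      # e.g., stay, switch to best neighbor, switch to 2nd best
alpha, gamma, eps = 0.1, 0.9, 0.1

Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    """Toy environment: penalize switching (a handover), lightly reward
    staying. A real simulator would return measured signal quality."""
    reward = -1.0 if action != 0 else 0.2
    return rng.integers(n_states), reward

state = rng.integers(n_states)
for t in range(10_000):
    # epsilon-greedy exploration
    if rng.random() < eps:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning temporal-difference update
    Q[state, action] += alpha * (
        reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("greedy action per state:", np.argmax(Q, axis=1)[:10])
```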

Keywords: drones connectivity, reinforcement learning, handovers optimization, decision distance

Procedia PDF Downloads 70
818 Technical Sustainable Management: An Instrument to Increase Energy Efficiency in Wastewater Treatment Plants, a Case Study in Jordan

Authors: Dirk Winkler, Leon Koevener, Lamees AlHayary

Abstract:

This paper contributes to the improvement of the municipal wastewater systems in Jordan. An important goal is increased energy efficiency in wastewater treatment plants and therefore lower expenses due to reduced electricity consumption. The chosen way to achieve this goal is the implementation of Technical Sustainable Management adapted to the Jordanian context. Three wastewater treatment plants in Jordan were chosen as a case study for the investigation. These choices were supported by the fact that the three treatment plants are representative in terms of average performance and size. Beyond that, an energy assessment had recently been conducted in those facilities. The project succeeded in proving the following hypothesis: energy efficiency in wastewater treatment plants can be improved by implementing principles of Technical Sustainable Management adapted to the Jordanian context. In this case study, a significant increase in energy efficiency was achieved by optimization of operational performance, identification and elimination of shortcomings, and appropriate plant management. Implementing Technical Sustainable Management as a low-cost tool with a comparatively small workload provides several additional benefits supplementing increased energy efficiency, including compliance with all legal and technical requirements, process optimization, and also increased work safety and convenient working conditions. The research in the chosen field continues because there are indications of the possible transfer of the adapted tool to other regions and sectors. The concept of Technical Sustainable Management adapted to the Jordanian context could be extended to other wastewater treatment plants in all regions of Jordan, but also to other sectors, including water treatment, water distribution, wastewater networks, desalination, and the chemical industry.

Keywords: energy efficiency, quality management system, technical sustainable management, wastewater treatment

Procedia PDF Downloads 126
817 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case where an external predictor x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y not, or less, related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.
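
In symbols, a standard change-of-variables statement consistent with the setup above (the notation is ours, not taken from the paper):

```latex
% Conditional density induced by the invertible NF map f, with the
% latent split z = f(y) = [z_p, z_n] as described above.
p(y \mid x) \;=\; p\bigl(z_p \mid x\bigr)\, p\bigl(z_n\bigr)\,
\left| \det \frac{\partial f(y)}{\partial y} \right|,
\qquad z = f(y) = [z_p, z_n].
```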

Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction

Procedia PDF Downloads 62
816 Comparison of Blockchain Ecosystem for Identity Management

Authors: K. S. Suganya, R. Nedunchezhian

Abstract:

In recent years, blockchain technology has been regarded as the most significant discovery of the digital era after the Internet and cloud computing. A blockchain is a simple, distributed public ledger that records all users' transaction details in blocks. The global copy of each block is shared among all the peer-to-peer network users after validation by the blockchain miners. Once a block is validated and accepted, it cannot be altered by any user, making transactions trust-free. The technology also resolves the problem of double-spending by using traditional cryptographic methods. Since the advent of Bitcoin, blockchain has been the backbone of all its transactions. In recent years, it has also found uses in many fields like smart contracts, smart city management, healthcare, etc. Identity management against digital identity theft has become a major concern for financial and other organizations. To address digital identity theft, blockchain technology can be employed with existing identity management systems that maintain a distributed public ledger of an individual's identity records, such as digital birth certificates, citizenship numbers, bank details, voter details, and driving licenses; stored as blocks verified on the blockchain, these records become time-stamped, unforgeable, and publicly visible to any legitimate user. The main challenge in using blockchain technology to prevent digital identity theft is ensuring the pseudo-anonymity and privacy of the users. This survey paper studies blockchain concepts, consensus protocols, and various blockchain-based digital identity management systems along with their research scope. The paper also discusses the role of blockchain in COVID-19 pandemic management through self-sovereign identity and supply chain management.

Keywords: blockchain, consensus protocols, bitcoin, identity theft, digital identity management, pandemic, COVID-19, self-sovereign identity

Procedia PDF Downloads 92
815 Research on Intercity Travel Mode Choice Behavior Considering Traveler’s Heterogeneity and Psychological Latent Variables

Authors: Yue Huang, Hongcheng Gan

Abstract:

The new urbanization pattern has led to a rapid growth in demand for short-distance intercity travel, and the emergence of new travel modes has also increased the variety of intercity travel options. In previous studies on intercity travel mode choice behavior, the impact of functional amenities of travel mode and travelers’ long-term personality characteristics has rarely been considered, and empirical results have typically been calibrated using revealed preference (RP) or stated preference (SP) data. This study designed a questionnaire that combines the RP and SP experiment from the perspective of a trip chain combining inner-city and intercity mobility, with consideration for the actual condition of the Huainan-Hefei traffic corridor. On the basis of RP/SP fusion data, a hybrid choice model considering both random taste heterogeneity and psychological characteristics was established to investigate travelers’ mode choice behavior for traditional train, high-speed rail, intercity bus, private car, and intercity online car-hailing. The findings show that intercity time and cost exert the greatest influence on mode choice, with significant heterogeneity across the population. Although inner-city cost does not demonstrate a significant influence, inner-city time plays an important role. Service attributes of travel mode, such as catering and hygiene services, as well as free wireless network supply, only play a minor role in mode selection. Finally, our study demonstrates that safety-seeking tendency, hedonism, and introversion all have differential and significant effects on intercity travel mode choice.

Keywords: intercity travel mode choice, stated preference survey, hybrid choice model, RP/SP fusion data, psychological latent variable, heterogeneity

Procedia PDF Downloads 74
814 New Platform of Biobased Aromatic Building Blocks for Polymers

Authors: Sylvain Caillol, Maxence Fache, Bernard Boutevin

Abstract:

Recent years have witnessed an increasing demand for renewable resource-derived polymers owing to increasing environmental concern and the restricted availability of petrochemical resources. Thus, a great deal of attention has been paid to renewable resource-derived polymers and to thermosetting materials especially, since the latter are crosslinked polymers and thus cannot be recycled. Also, most thermosetting materials contain aromatic monomers, able to confer high mechanical and thermal properties on the network. Therefore, access to biobased, non-harmful, and available aromatic monomers is one of the main challenges of the years to come. Starting from phenols available in large volumes from renewable resources, our team designed platforms of chemicals usable for the synthesis of various polymers. One of these phenols, vanillin, which is readily available from lignin, was studied more specifically. Various aromatic building blocks bearing polymerizable functions were synthesized: epoxy, amine, acid, carbonate, alcohol, etc. These vanillin-based monomers can potentially lead to numerous polymers. The example of epoxy thermosets was taken, as there is also the problem of bisphenol A substitution for these polymers. Materials were prepared from the biobased epoxy monomers obtained from vanillin. Their thermo-mechanical properties were investigated, and the effect of the monomer structure is discussed. The properties of the prepared materials were found to be comparable to the current industrial reference, indicating a potential replacement of petrosourced, bisphenol A-based epoxy thermosets by biosourced, vanillin-based ones. The tunability of the final properties was achieved through the choice of monomer and through a well-controlled oligomerization reaction of these monomers. This follows the same strategy as the one currently used in industry, which supports the potential of these vanillin-derived epoxy thermosets as substitutes for their petro-based counterparts.

Keywords: lignin, vanillin, epoxy, amine, carbonate

Procedia PDF Downloads 200