Search results for: covering machine
1956 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer
Authors: Binder Hans
Abstract:
Cancer is no longer seen as solely a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as the genome, transcriptome and epigenome is indispensable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to map the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to uncover cancer genesis, progression and heterogeneity. Basic challenges and tasks arise ‘beyond sequencing’ because of the large size of the data, their complexity, the need to search for hidden structures in the data, the need for knowledge mining to discover biological function, and the need for systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and to clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOM) represent one interesting option to tackle these bioinformatics tasks. The SOM method enables recognizing complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the study of complex diseases such as gliomas, melanomas and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic and methylomic data. The integrative-omics portrayal approach is based on joint training on the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas
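As an illustration of the SOM portrayal idea described above, the following minimal NumPy sketch trains a small self-organizing map on a placeholder multi-omics matrix and derives an image-like "portrait" for one sample; the grid size, learning parameters and data dimensions are assumptions, and this is not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder multi-omics matrix: 100 samples x 500 features (e.g. expression + methylation).
X = rng.normal(size=(100, 500))

grid_x, grid_y, n_features = 10, 10, X.shape[1]
weights = rng.normal(size=(grid_x, grid_y, n_features))
coords = np.stack(np.meshgrid(np.arange(grid_x), np.arange(grid_y), indexing="ij"), axis=-1)

def best_matching_unit(x):
    """Return the grid position of the unit closest to sample x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

n_iter, lr0, sigma0 = 2000, 0.5, 3.0
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.array(best_matching_unit(x))
    lr = lr0 * np.exp(-t / n_iter)            # decaying learning rate
    sigma = sigma0 * np.exp(-t / n_iter)      # shrinking neighbourhood radius
    dist2 = np.sum((coords - bmu) ** 2, axis=-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]  # Gaussian neighbourhood weights
    weights += lr * h * (x - weights)

# "Portrait" of one sample: distance of its profile to every map unit, as a 10x10 image.
portrait = np.linalg.norm(weights - X[0], axis=2)
print(portrait.shape)
```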
Procedia PDF Downloads 149
1955 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes
Authors: Madushani Rodrigo, Banuka Athuraliya
Abstract:
In today's world of medical diagnosis and prediction, machine learning stands out as a powerful tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. Interpreting X-ray images relies on the expertise and experience of medical professionals, and radiographic images are sometimes of low quality, leading to potential issues. Therefore, it is necessary to have a proper approach to accurately localize and classify fractures in real time. The research has revealed that the optimal approach needs to address the stated problem and employ appropriate radiographic image processing techniques and object detection algorithms. These algorithms should effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using the enhanced U-Net architecture. Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the available 12 different types of fracture patterns, which include avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the U-Net architecture, achieved a high accuracy level of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the classification ensemble model, built on the ResNet18 and VGG16 architectures, achieved an accuracy of 81.0%, showcasing its ability to categorize various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating its potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes.
Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16
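The classification ensemble described above can be sketched with PyTorch/torchvision as a soft-voting average of ResNet18 and VGG16 softmax outputs over the 12 fracture classes; the head replacement and inference code below is a generic illustration under assumed settings, not the FracXpert implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 12  # avulsion, comminuted, ..., spiral (as listed in the abstract)

# Two backbone classifiers with their heads replaced for 12 fracture classes.
resnet = models.resnet18()
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)

vgg = models.vgg16()
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)

def ensemble_predict(x):
    """Soft-voting ensemble: average the two softmax distributions."""
    resnet.eval(); vgg.eval()
    with torch.no_grad():
        p1 = torch.softmax(resnet(x), dim=1)
        p2 = torch.softmax(vgg(x), dim=1)
    p = (p1 + p2) / 2
    conf, label = p.max(dim=1)   # confidence score and predicted class
    return label, conf

# Example: one dummy X-ray tensor (batch of 1, 3 channels, 224x224).
labels, confidence = ensemble_predict(torch.randn(1, 3, 224, 224))
print(int(labels.item()), float(confidence))
```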
Procedia PDF Downloads 124
1954 To Handle Data-Driven Software Development Projects Effectively
Authors: Shahnewaz Khan
Abstract:
Machine learning (ML) techniques are often used in projects for creating data-driven applications. These tasks typically demand additional research and analysis. The proper technique and strategy must be chosen to ensure the success of data-driven projects; otherwise, even after exerting a lot of effort, the necessary development might not always be possible. This paper examines the workflow of data-driven software development projects and its implementation process in order to describe how to manage such a project successfully, which will assist in minimizing the added workload.
Keywords: data, data-driven projects, data science, NLP, software project
Procedia PDF Downloads 84
1953 21st Century Gunboat Diplomacy and Strategic Sea Areas
Authors: Mustafa Avsever
Abstract:
Throughout history, states have attached great importance to the seas in terms of economics and security. Advanced civilizations have always been founded in coastal regions. Over time, human beings have tended to trade and have naturally always aimed to get more and more. The seas, by covering 71% of the earth, provide the greatest economic opportunities for access to raw material resources and the world market. As a result, the seas have become the most important areas of conflict over the course of time. Coastal states use the seas as a defense zone and as a medium for trade, marine transportation and power transfer; they have acquired colonies overseas and increased their capital, raw materials and labor. Societies have increased their economic prosperity through their navies in order to retain their welfare and achieve their foreign policy objectives. Sometimes they have imposed their demands through the use or threat of limited naval force in accordance with their interests; that is gunboat diplomacy. Today, we can see examples of gunboat diplomacy in the Eastern Mediterranean, during the Ukraine crisis, in the dispute between North Korea and South Korea, and in the ongoing power struggle in the Asia-Pacific. Gunboat diplomacy has been and continues to be applied consistently in solving problems by the stronger side of the problem. The purpose of this article is to examine the use of navies under gunboat diplomacy as an active instrument of foreign and security policy and to reveal the strategic sea areas in which gunboat diplomacy is used effectively in the matrix of international politics in the 21st century.
Keywords: gunboat diplomacy, maritime strategy, sea power, strategic sea lands
Procedia PDF Downloads 433
1952 Perception of Secondary Schools’ Students on Computer Education in Federal Capital Territory (FCT-Abuja), Nigeria
Authors: Salako Emmanuel Adekunle
Abstract:
Computer education refers to the knowledge and ability to use computers and related technology efficiently, with a range of skills covering levels from basic use to advanced. Computers continue to make an ever-increasing impact on all aspects of human endeavour, such as education. With the numerous benefits of computer education, what are the insights of students on computer education? This study investigated the perception of senior secondary school students on computer education in the Federal Capital Territory (FCT), Abuja, Nigeria. A sample of 7,500 senior secondary school students, drawn from one hundred (100) private and fifty (50) public schools within the FCT, was involved in the study. They were selected by using a simple random sampling technique. A questionnaire [PSSSCEQ] was developed and validated through expert judgement, and a reliability coefficient of 0.84 was obtained. It was used to gather relevant data on computer education. Findings confirmed that the students in the FCT had a positive perception of computer education. Some factors were identified that affect students’ perception of computer education. The null hypotheses were tested using t-test and ANOVA statistical analyses at the 0.05 level of significance. Based on these findings, some recommendations were made, which include: competent teachers should be employed in all secondary schools to help students acquire relevant knowledge in computer education; technological support should be provided to all secondary schools to help the users (students) solve specific problems in computer education; and financial support should be provided to procure computer facilities that will enhance the teaching and learning of computer education.
Keywords: computer education, perception, secondary school, students
Procedia PDF Downloads 468
1951 Test Suite Optimization Using an Effective Meta-Heuristic BAT Algorithm
Authors: Anuradha Chug, Sunali Gandhi
Abstract:
Regression testing is a very expensive and time-consuming process carried out to ensure the validity of modified software. Due to the availability of insufficient resources to re-execute all the test cases in a time-constrained environment, efforts are underway to generate test data automatically without human effort. Many search-based techniques have been proposed to generate efficient, effective as well as optimized test data, so that the overall cost of software testing can be minimized. The generated test data should be able to uncover all potential lapses that exist in the software or product. Inspired by the natural behavior of bats in searching for food sources, the current study employed a meta-heuristic, search-based bat algorithm for optimizing the test data on the basis of certain parameters without compromising their effectiveness. Mathematical functions are also applied that can effectively filter out the redundant test data. As many as 50 Java programs were used to check the effectiveness of the proposed test data generation, and it was found that an 86% saving in testing effort can be achieved using the bat algorithm while covering 100% of the software code for testing. The bat algorithm was found to be more efficient in terms of simplicity and flexibility when the results were compared with other nature-inspired algorithms such as the Firefly Algorithm (FA), Hill Climbing Algorithm (HC) and Ant Colony Optimization (ACO). The output of this study would be useful to testers, as they can achieve 100% path coverage for testing with a minimum number of test cases.
Keywords: regression testing, test case selection, test case prioritization, genetic algorithm, bat algorithm
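For reference, the core loop of the bat algorithm mentioned above (frequency, velocity, loudness and pulse-rate updates in Yang's standard formulation) is sketched below; the fitness function is a stand-in for a test-data coverage score, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # Placeholder objective; in practice this would measure, e.g., uncovered branches.
    return np.sum(x ** 2)

n_bats, dim, n_iter = 20, 5, 200
f_min, f_max, alpha, gamma, A0, r0 = 0.0, 2.0, 0.9, 0.9, 1.0, 0.5

x = rng.uniform(-5, 5, (n_bats, dim))      # bat positions (candidate test data)
v = np.zeros((n_bats, dim))                # velocities
A = np.full(n_bats, A0)                    # loudness
r = np.full(n_bats, r0)                    # pulse emission rate
fit = np.array([fitness(b) for b in x])
best = x[np.argmin(fit)].copy()

for t in range(1, n_iter + 1):
    for i in range(n_bats):
        freq = f_min + (f_max - f_min) * rng.random()
        v[i] += (x[i] - best) * freq
        candidate = x[i] + v[i]
        if rng.random() > r[i]:            # local random walk around the best bat
            candidate = best + 0.01 * A.mean() * rng.normal(size=dim)
        f_new = fitness(candidate)
        if f_new < fit[i] and rng.random() < A[i]:
            x[i], fit[i] = candidate, f_new
            A[i] *= alpha                              # decrease loudness
            r[i] = r0 * (1 - np.exp(-gamma * t))       # increase pulse rate
        if f_new < fitness(best):
            best = candidate.copy()

print("best fitness:", fitness(best))
```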
Procedia PDF Downloads 382
1950 Algorithm for Improved Tree Counting and Detection through Adaptive Machine Learning Approach with the Integration of Watershed Transformation and Local Maxima Analysis
Authors: Jigg Pelayo, Ricardo Villar
Abstract:
The Philippines has long been considered a valuable producer of high value crops globally. The country’s employment and economy have been dependent on agriculture, thus increasing the demand for efficient agricultural mechanisms. Remote sensing and geographic information technology have proven to effectively provide applications for precision agriculture through image-processing techniques, considering the development of aerial scanning technology in the country. Accurate information concerning the spatial correlation within the field is very important, especially for precision farming of high value crops. The availability of height information and high spatial resolution images obtained from aerial scanning, together with the development of new image analysis methods, offers relevant influence on precision agriculture techniques and applications. In this study, an algorithm was developed and implemented to detect and count high value crops simultaneously through adaptive scaling of a support vector machine (SVM) algorithm within an object-oriented approach, combining watershed transformation and a local maxima filter to enhance tree counting and detection. The methodology is compared to cutting-edge template matching algorithm procedures to demonstrate its effectiveness on a demanding tree counting, recognition and delineation problem. Since common data and image-processing techniques are utilized, it can be easily implemented in production processes to cover large agricultural areas. The algorithm was tested on high value crops like palm, mango and coconut located in Misamis Oriental, Philippines, showing good performance, in particular for young adult and adult trees, significantly above 90%. The outputs support inventory and database updating, allowing for the reduction of field work and manual interpretation tasks.
Keywords: high value crop, LiDAR, OBIA, precision agriculture
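The local-maxima and watershed part of the workflow above can be sketched with scikit-image: treetops are seeded at local maxima of a canopy height model (CHM) and crowns are delineated by a marker-controlled watershed. The synthetic CHM, height threshold and minimum peak distance below are assumptions, and the SVM classification stage is omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

rng = np.random.default_rng(2)

# Synthetic canopy height model (CHM): a few Gaussian "crowns" on flat ground.
yy, xx = np.mgrid[0:200, 0:200]
chm = np.zeros((200, 200))
for cy, cx in rng.integers(20, 180, size=(15, 2)):
    chm += 8.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 6.0 ** 2))

canopy_mask = chm > 2.0                         # assumed minimum tree height (m)

# Treetops = local maxima of the CHM, at least 10 px apart, within the canopy.
peaks = peak_local_max(chm, min_distance=10, labels=canopy_mask)
markers = np.zeros_like(chm, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

# Marker-controlled watershed on the inverted CHM delineates individual crowns.
crowns = watershed(-chm, markers, mask=canopy_mask)
print("tree count:", len(peaks), "| segmented crowns:", crowns.max())
```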
Procedia PDF Downloads 402
1949 Change Detection of Vegetative Areas Using Land Use Land Cover of Desertification Vulnerable Areas in Nigeria
Authors: T. Garba, Y. Y. Sabo A. Babanyara, K. G. Ilellah, A. K. Mutari
Abstract:
This study used the Normalized Difference Vegetation Index (NDVI) and maps compiled from the classification of Landsat TM and Landsat ETM images of 1986 and 1999, respectively, and NigeriaSat-1 images of 2007 to quantify changes in land use and land cover in selected areas of Nigeria covering 143,609 hectares that are threatened by the encroaching Sahara desert. The results of this investigation revealed a decrease in natural vegetation over the three time slices (1986, 1999 and 2007), which was characterised by an increase in high positive pixel values from 0.04 in 1986 to 0.22 and 0.32 in 1999 and 2007, respectively, and a decrease in natural vegetation from 74,411.60 ha in 1986 to 28,591.93 ha and 21,819.19 ha in 1999 and 2007, respectively. The same results also revealed a periodic trend in which there was a progressive increase in the cultivated area from 60,191.87 ha in 1986 to 104,376.07 ha in 1999 and a terminal decrease to 88,868.31 ha in 2007. These findings point to an expansion of cultivated areas in the initial period between 1986 and 1999 and a reversal of these increases in the terminal period between 1999 and 2007. The study also revealed a progressive expansion of built-up areas from 1,681.68 ha in 1986 to 2,661.82 ha in 1999 and to 3,765.35 ha in 2007. These results argue for the urgent need to protect and conserve the depleting natural vegetation by adopting sustainable human resource use practices, i.e. intensive farming, in order to minimize the persistent depletion of natural vegetation.
Keywords: changes, classification, desertification, vegetation changes
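For reference, the NDVI used above is computed per pixel as (NIR − Red)/(NIR + Red); the sketch below shows the calculation and a simple between-date difference on placeholder bands (the band values and the change threshold are assumptions).

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), guarded against division by zero."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-6)

# Placeholder reflectance bands for two dates (e.g. a 1986 and a 2007 scene).
rng = np.random.default_rng(3)
red_1986, nir_1986 = rng.random((2, 100, 100))
red_2007, nir_2007 = rng.random((2, 100, 100))

change = ndvi(nir_2007, red_2007) - ndvi(nir_1986, red_1986)
vegetation_loss = change < -0.1   # assumed threshold for a meaningful NDVI decrease
print("pixels with NDVI decrease:", int(vegetation_loss.sum()))
```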
Procedia PDF Downloads 388
1948 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint
Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar
Abstract:
Frequency ratio (FR) and analytical hierarchy process (AHP) methods have been developed based on past landslide failure points to produce landslide susceptibility maps, because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial resolution. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on the inventory landslide points. The findings also showed that around 35% of the study region was made up of places with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The areas with the highest landslide risk include the western part of Amhara Saint Town, the northern part, and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the top leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. It also suggests that various places should take different safeguards to reduce or prevent serious damage from landslide events.
Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine
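The five-model comparison reported above can be reproduced in outline with scikit-learn, scoring each classifier by F1 and AUC on a held-out split; the synthetic feature matrix stands in for the fourteen conditioning factors, and the default hyper-parameters are assumptions rather than the study's tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score, roc_auc_score

# Placeholder stand-in for 14 landslide conditioning factors and inventory labels.
X, y = make_classification(n_samples=1000, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
    "NB": GaussianNB(),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]       # susceptibility score in [0, 1]
    pred = (proba > 0.5).astype(int)
    print(f"{name}: F1={f1_score(y_te, pred):.2f}  AUC={roc_auc_score(y_te, proba):.2f}")
```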
Procedia PDF Downloads 84
1947 Solving Mean Field Problems: A Survey of Numerical Methods and Applications
Authors: Amal Machtalay
Abstract:
In this survey, we aim to review the rapidly growing literature on numerical methods for solving different forms of mean field problems, namely mean field games (MFG), mean field control (MFC), potential MFGs, and master equations, as well as their corresponding recent applications. Here, we distinguish two families of numerical methods: iterative methods based on mesh generation, and so-called mesh-free methods, usually related to neural networks and learning frameworks.
Keywords: mean-field games, numerical schemes, partial differential equations, complex systems, machine learning
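For orientation, the prototypical second-order MFG system that most of the surveyed schemes discretize couples a backward Hamilton-Jacobi-Bellman equation for the value function u with a forward Fokker-Planck equation for the population density m (standard notation, not specific to this survey):

```latex
\begin{aligned}
-\partial_t u - \nu \Delta u + H(x, \nabla u) &= f(x, m), && (t,x) \in (0,T)\times\Omega,\\
\partial_t m - \nu \Delta m - \operatorname{div}\!\big(m\, \partial_p H(x, \nabla u)\big) &= 0,\\
m(0,\cdot) = m_0, \qquad u(T,\cdot) &= g\big(\cdot, m(T)\big).
\end{aligned}
```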
Procedia PDF Downloads 115
1946 Decoding Gender Disparities in AI: An Experimental Exploration Within the Realm of AI and Trust Building
Authors: Alexander Scott English, Yilin Ma, Xiaoying Liu
Abstract:
The widespread use of artificial intelligence in everyday life has triggered a fervent discussion covering a wide range of areas. However, to date, research on the influence of gender in its various segments and factors from a social science perspective is still limited. This study aims to explore whether there are gender differences in human trust in AI for its application in basic everyday life and how trust correlates with perceived similarity, perceived emotions (including competence and warmth), and attractiveness. We conducted a study involving 321 participants using a 2 (masculinized vs. feminized voice of the AI) by 2 (pitch level of the AI's voice) between-subjects experimental design. Four contexts were created for the study and randomly assigned. The results of the study showed significant gender differences in perceived similarity, trust, and perceived emotion of the AIs, with females rating them significantly higher than males. Trust was higher in relation to AIs presenting the same gender (e.g., human female to female AI, human male to male AI). Mediation modeling tests indicated that emotion perception and similarity played a mediating role in trust. Notably, although trust in AIs was strongly correlated with human gender, there was no significant effect of the gender of the AI. In addition, the study discusses the effects of subjects' age, job search experience, and job type on the findings.
Keywords: artificial intelligence, gender differences, human-robot trust, mediation modeling
Procedia PDF Downloads 45
1945 Hydro-Geochemistry and Groundwater Quality Assessment of Rajshahi City in Bangladesh
Authors: M. G. Mostafa, S. M. Helal Uddin, A. B. M. H. Haque, M. R. Hasan
Abstract:
The study was carried out to understand the hydro-geochemistry and groundwater quality in Rajshahi City, Bangladesh. 240 groundwater (shallow and deep tubewell) samples were collected during the years 2009-2010, covering the pre-monsoon, monsoon and post-monsoon seasons, and analyzed for various physico-chemical parameters including major ions. The results reveal that the groundwater was slightly acidic to neutral in nature, and the total hardness observed in all samples falls under the hard to very hard category. The concentrations of calcium, iron, manganese, arsenic and lead ions were found to be far above the permissible limits in most of the shallow tubewell water samples. The analysis results show that the mean concentrations of cations and anions were observed in the order Ca > Mg > Na > K > Fe > Mn > Pb > Zn > Cu > As (total) > Cd and HCO3- > Cl- > SO42- > NO3-, respectively. The concentrations of TH, TDS, HCO3-, NO3-, Ca, Fe, Zn, Cu, Pb, and As (total) were found to be higher during post-monsoon compared to pre-monsoon, whilst K, Mg, Cd, and Cl were found to be higher during pre-monsoon and monsoon. Ca-HCO3 was identified as the major hydrochemical facies using the Piper trilinear diagram. Higher concentrations of toxic metals, including Fe, Mn, As and Pb, were found, indicating various health hazards. The results also illustrate that rock-water interaction was the major geochemical process controlling the chemistry of groundwater in the study area.
Keywords: physico-chemical parameters, groundwater, geochemistry, Rajshahi city
Procedia PDF Downloads 315
1944 Traditional Knowledge on Living Fences in Andean Linear Plantations
Authors: German Marino Rivera
Abstract:
Linear plantations are a common practice in several countries as living fences (LF) delimiting agroecosystems. They are composed of multipurpose perennial woody species that provide assets, protection, and supply services. However, not much is known about these systems in some traditional communities, such as those of the Andean region, including the species composition and the social and ecological benefits of the species used. In the High Andean Colombian region, LF seem to be very typical and diverse. This study aimed to analyze the traditional knowledge about LF systems, including the species composition and their uses, in rural communities of Alto Casanare, Colombia. Field measurements, interviews, guided tours, and species sampling were carried out in order to describe the traditional practices and the species used in the LF systems. The use values were estimated through the Coefficient of Importance of the Species (CIS). A total of 26 farms engage in LF practices, with fences covering a total of 9,283.3 m. In these systems, 30 species were identified, belonging to 23 families. Alnus acuminata was the species with the highest CIS. The species presented multipurpose uses for both economic and ecological purposes. The transmission of traditional ecological knowledge (TEK) about the species used is very heterogeneous among the farmers. Many of the species used were not documented, with reciprocal gaps between the literature and traditional species uses. Exchanging this information would increase the species' versatility, improve the socioeconomic conditions of these communities, and increase the agrobiodiversity and ecological services provided by LF. The description of the TEK on LF provides a better understanding of the relationship of these communities with natural resources, pointing out creative approaches to achieve local environmental conservation in these agroecosystems and promoting socioeconomic development.
Keywords: ethnobotany, living fences, traditional communities, agroecology
Procedia PDF Downloads 94
1943 The Classification Accuracy of Finance Data through Holder Functions
Authors: Yeliz Karaca, Carlo Cattani
Abstract:
This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. In this study, the attributes of the finance dataset belonging to 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America and South America) have been examined. These countries are the ones most affected by the attributes with regard to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes have been identified; (b) the significant and self-similar attributes have been applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the classification accuracy outcomes have been compared with respect to the attributes that affect the countries’ financial development. This study has revealed, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential function).
Keywords: artificial neural networks, finance data, Holder regularity, multifractals
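One common way to obtain a local Holder exponent of the kind used above is an oscillation-based estimate: the log of the local oscillation is regressed against the log of the window size, and the slope approximates the pointwise exponent. The sketch below applies this generic estimator to a placeholder series; it is not necessarily the exact procedure of the study.

```python
import numpy as np

def local_holder_exponent(x, t, scales=(2, 4, 8, 16, 32)):
    """Estimate the pointwise Holder exponent at index t from local oscillations."""
    oscillations = []
    for s in scales:
        lo, hi = max(0, t - s), min(len(x), t + s + 1)
        window = x[lo:hi]
        oscillations.append(window.max() - window.min() + 1e-12)
    # Slope of log(oscillation) vs. log(scale) approximates the exponent.
    slope, _ = np.polyfit(np.log(scales), np.log(oscillations), 1)
    return slope

rng = np.random.default_rng(4)
series = np.cumsum(rng.normal(size=2000))   # placeholder "finance" series (random walk)
exponents = np.array([local_holder_exponent(series, t) for t in range(50, 1950)])
print("mean local exponent:", round(exponents.mean(), 3))  # ~0.5 expected for Brownian motion
```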
Procedia PDF Downloads 247
1942 Power Control of a Doubly-Fed Induction Generator Used in Wind Turbine by RST Controller
Authors: A. Boualouch, A. Frigui, T. Nasser, A. Essadki, A. Boukhriss
Abstract:
This work deals with the vector control of the active and reactive powers of a Doubly-Fed Induction Generator (DFIG) used as a wind generator, by means of the polynomial RST controller. The control of the statoric power transfer between the machine and the grid is achieved by acting on the rotor parameters, and the control is provided by the polynomial RST controller. The performance and robustness of the controller are compared with a PI controller and evaluated by simulation results in MATLAB/Simulink.
Keywords: DFIG, RST, vector control, wind turbine
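As background, a discrete-time polynomial RST controller applies the control law R(q⁻¹)u(k) = T(q⁻¹)r(k) − S(q⁻¹)y(k). The sketch below implements this difference equation on a first-order placeholder plant; the polynomial coefficients and the plant model are illustrative assumptions, not the DFIG design of this work.

```python
import numpy as np

# Placeholder RST polynomials (coefficients of q^0, q^-1, ...), assumed for illustration.
R = np.array([1.0, -0.4])
S = np.array([0.8, -0.3])
T = np.array([1.1])   # chosen so this toy closed loop has unity static gain

def rst_control(r_hist, y_hist, u_hist):
    """Solve R(q^-1) u(k) = T(q^-1) r(k) - S(q^-1) y(k) for u(k).
    Histories are most-recent-first lists."""
    return (np.dot(T, r_hist[: len(T)])
            - np.dot(S, y_hist[: len(S)])
            - np.dot(R[1:], u_hist[: len(R) - 1])) / R[0]

# Simple first-order placeholder plant: y(k) = 0.9 y(k-1) + 0.1 u(k-1).
r_hist, y_hist, u_hist = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
y = 0.0
for k in range(100):
    ref = 1.0                                  # step reference (e.g. a power setpoint)
    r_hist = [ref] + r_hist[:-1]
    u = rst_control(r_hist, y_hist, u_hist)
    u_hist = [u] + u_hist[:-1]
    y = 0.9 * y + 0.1 * u                      # plant response
    y_hist = [y] + y_hist[:-1]

print("output after 100 steps:", round(y, 3))  # converges towards the reference
```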
Procedia PDF Downloads 659
1941 A Precursory Observation on Butterflies (Arthropoda, Insecta, Lepidoptera) of Umphang District, Tak Province, Western part of Thailand
Authors: Pisit Poolprasert, Auttpol Nakwa, Keerati Tanruean, Ezra Mongkolchaichana, Sinlapachai Senarat, Mark Tunmore
Abstract:
This preliminary study aimed to observe butterfly species diversity in two selected subdistricts, i.e., Mae Klong and Umphang, of Umphang district, Tak province, northern Thailand, from May to September 2018. A survey method using a sweep net was employed along line transects between 10.00–12.00 and 13.00–15.00 h. A total of 337 butterflies representing 37 species and 26 genera in five families were encountered. The family Nymphalidae held the highest species richness (15 species), followed by Papilionidae (9 species) and Pieridae (6 species), respectively. Herein, four uncommon species, namely Junonia iphita iphita, Tanaecia julii odilina, Penthema darlisa melema, and Papilio alcmenor alcmenor, were recorded. The Shannon diversity index (H’) for all samples obtained from the pooled data set of this observation was 2.563, with relatively high values of evenness (J’ = 0.710) and Simpson index (D = 0.829). Regarding the similarity index (Ss), the butterfly assemblages recorded in Mae Klong and Umphang shared about 0.629, implying that the environmental conditions in both surveyed zones were alike. Additionally, the species accumulation curve for both locations was still increasing gradually when the collection ended, indicating that the number of lepidopteran species would rise if the survey were continued in the following months. Nevertheless, to record more butterfly taxa, observing different plant communities covering every season and using several survey techniques should be considered for further investigation.
Keywords: butterfly, biodiversity, Tak province, Thailand
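The diversity figures quoted above follow standard formulas; the sketch below computes Shannon diversity H′, Pielou's evenness J′ and a Simpson index (here the common 1 − Σpᵢ² form, an assumption about the exact variant used) from species abundance counts.

```python
import numpy as np

def diversity_indices(counts):
    """Shannon H', Pielou evenness J' and Simpson index (1 - sum p_i^2) from abundances."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]
    shannon = -np.sum(p * np.log(p))          # H'
    evenness = shannon / np.log(len(p))       # J' = H' / ln(S)
    simpson = 1.0 - np.sum(p ** 2)            # D (Gini-Simpson form)
    return shannon, evenness, simpson

# Placeholder abundances for 37 species (the real counts come from the field data).
rng = np.random.default_rng(5)
abundances = rng.integers(1, 30, size=37)
H, J, D = diversity_indices(abundances)
print(f"H' = {H:.3f}, J' = {J:.3f}, D = {D:.3f}")
```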
Procedia PDF Downloads 115
1940 Machine Learning Based Digitalization of Validated Traditional Cognitive Tests and Their Integration to Multi-User Digital Support System for Alzheimer’s Patients
Authors: Ramazan Bakir, Gizem Kayar
Abstract:
It is known that Alzheimer’s disease and dementia are the two most common types of neurodegenerative diseases, and their prevalence has been rising rapidly over the last couple of years. As populations age all over the world, researchers expect the rate of this acceleration to become much higher. Unfortunately, there is no known pharmacological cure for either, although some treatments help to reduce the rate of cognitive decline. This is why non-pharmacological treatment and tracking methods have received more attention over the last five years. Many researchers, including well-known associations and hospitals, lean towards using non-pharmacological methods to support cognitive function and improve the patient’s quality of life. As the dementia symptoms related to mind, learning, memory, speaking, problem-solving, social abilities and daily activities gradually worsen over the years, many researchers know that cognitive support should start from the very beginning of the symptoms in order to slow down the decline. At this point, the life of a patient and caregiver can be improved with some daily activities and applications. These activities include, but are not limited to, basic word puzzles, daily cleaning activities, and taking notes. Later, these activities and their results should be observed carefully, and this is only possible during in-person meetings between the patient/caregiver and the M.D. in hospitals. These meetings can be quite time-consuming, exhausting and financially ineffective for hospitals, medical doctors, caregivers and especially for patients. On the other hand, digital support systems are showing positive results for all stakeholders of healthcare systems. This can be observed in countries that have adopted telemedicine systems. The biggest potential of our system is setting up the inter-user communication in the best possible way. In our project, we propose machine learning based digitalization of validated traditional cognitive tests (e.g. MOCA, Afazi, left-right hemisphere), their analyses for high-quality follow-up, and communication systems for all stakeholders. This platform has a high potential not only for patient tracking but also for making all stakeholders feel safe through all stages. As the registered hospitals assign corresponding medical doctors to the system, these MDs are able to register their own patients and assign special tasks for each patient. With our integrated machine learning support, MDs are able to track the failure and success rates of each patient and also see general averages among similarly progressed patients. In addition, our platform also supports multi-player technology, which helps patients play with their caregivers so that they feel much safer at any point they are uncomfortable. By also gamifying the daily household activities, the patients will be able to repeat their social tasks, and we will provide non-pharmacological reminiscence therapy (RT – life review therapy). All collected data will be mined by our data scientists and analyzed meaningfully. In addition, we will also add gamification modules for caregivers based on Naomi Feil’s Validation Therapy. Both behaving positively towards the patient and keeping oneself mentally healthy are important for caregivers, and we aim to provide a therapy system based on gamification for them, too.
When this project accomplishes all the above-written tasks, patients will have the chance to do many tasks at home remotely, and MDs will be able to follow them up very effectively. We propose a complete platform, and the whole project is both time- and cost-effective for supporting all stakeholders.
Keywords: alzheimer’s, dementia, cognitive functionality, cognitive tests, serious games, machine learning, artificial intelligence, digitalization, non-pharmacological, data analysis, telemedicine, e-health, health-tech, gamification
Procedia PDF Downloads 138
1939 Measuring the Extent of Equalization in Fiscal Transfers in India: An Index-Based Approach
Authors: Ragini Trehan, D.K. Srivastava
Abstract:
In the post-planning era, India’s fiscal transfers from the central to state governments are solely determined by the Finance Commissions (FCs). While in some of the well-established federations such as Australia, Canada, and Germany, equalization serves as the guiding principle of fiscal transfers and is constitutionally mandated, in India, it is not explicitly mandated, and FCs attempt to implement it indirectly by a combination of a formula-based share in the divisible pool of central taxes supplemented by a set of grants. In this context, it is important to measure the extent of equalization that is achieved through FC transfers with a view to improving the design of such transfers. This study uses an index-based methodology for measuring the degree of equalization achieved through FC transfers, covering the period from FC12 to the first year of FC15, spanning from 2005-06 to 2020-21. The ‘Index of Equalization’ shows that the extent of equalization has remained low, in the range of 30% to 37%, for the four Commission periods under review. The highest degree of equalization, at 36.7%, was witnessed in the FC12 period, and the lowest equalization, at 29.5%, was achieved during the FC15(1) period. The equalizing efficiency of recommended transfers also shows a consistent fall from 11.4% in the FC12 period to 7.5% by the FC15(1) period. Further, considering progressivity in fiscal transfers as a special case of equalizing transfers, this study shows that the scheme of per capita total transfers, when determined using the equalization approach, is more progressive and is characterized by minimal deviations as compared to the profile of transfers recommended by recent FCs.
Keywords: fiscal transfers, index of equalization, equalizing efficiency, fiscal capacity, expenditure needs, Finance Commission, tax effort
Procedia PDF Downloads 76
1938 Evaluating Surface Water Quality Using WQI, Trend Analysis, and Cluster Classification in Kebir Rhumel Basin, Algeria
Authors: Lazhar Belkhiri, Ammar Tiri, Lotfi Mouni, Fatma Elhadj Lakouas
Abstract:
This study evaluates the surface water quality in the Kebir Rhumel Basin by analyzing hydrochemical parameters. To assess spatial and temporal variations in water quality, we applied the Water Quality Index (WQI), Mann-Kendall (MK) trend analysis, and hierarchical cluster analysis (HCA). Monthly measurements of eleven hydrochemical parameters were collected across eight stations from January 2016 to December 2020. Calcium and sulfate emerged as the dominant cation and anion, respectively. WQI analysis indicated a high incidence of poor water quality at stations Ain Smara (AS), Beni Haroune (BH), Grarem (GR), and Sidi Khalifa (SK), where 89.5%, 90.6%, 78.2%, and 62.7% of samples, respectively, fell into this category. The MK trend analysis revealed a significant upward trend in WQI at Oued Boumerzoug (ON) and SK stations, signaling temporal deterioration in these areas. HCA grouped the dataset into three clusters, covering approximately 22%, 30%, and 48% of the months, respectively. Within these clusters, specific stations exhibited elevated WQI values: GR and ON in the first cluster, OB and SK in the second, and AS, BH, El Milia (EM), and Hammam Grouz (HG) in the third. Furthermore, approximately 38%, 41%, and 38% of samples in clusters one, two, and three, respectively, were classified as having poor water quality. These findings provide essential insights for policymakers in formulating strategies to restore and manage surface water quality in the region.
Keywords: surface water quality, water quality index (WQI), Mann-Kendall trend analysis, hierarchical cluster analysis (HCA), spatial-temporal distribution, Kebir Rhumel Basin
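The Mann-Kendall trend statistic used above can be computed directly; the sketch below returns the S statistic, the normal-approximation Z score and a two-sided p-value for a monthly WQI series (tie correction is omitted on the assumption that ties are rare), and the synthetic series is a placeholder.

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Return the MK S statistic, normalized Z score and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance without tie correction
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return s, z, p

# Example: 60 months of a synthetic WQI series with a mild upward trend.
rng = np.random.default_rng(6)
wqi_series = 50 + 0.2 * np.arange(60) + rng.normal(scale=5, size=60)
print(mann_kendall(wqi_series))
```

A positive Z with a small p-value indicates a statistically significant upward (deteriorating) trend at that station.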
Procedia PDF Downloads 19
1937 Using Support Vector Machines for Measuring Democracy
Authors: Tommy Krieger, Klaus Gruendler
Abstract:
We present a novel approach for measuring democracy, which enables a very detailed and sensitive index. This method is based on Support Vector Machines, a mathematical algorithm for pattern recognition. Our implementation evaluates 188 countries in the period between 1981 and 2011. The Support Vector Machines Democracy Index (SVMDI) is continuous on the 0-1 interval and robust to variations in the numerical process parameters. The algorithm introduced here can be used for every concept of democracy without additional adjustments, and due to its flexibility it is also a valuable tool for comparison studies.
Keywords: democracy, democracy index, machine learning, support vector machines
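A stripped-down illustration of the general idea: training an SVM on example observations with known regime labels and using its calibrated class probability as a continuous 0-1 score. The features and data below are placeholders, and the actual SVMDI construction is more elaborate.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)

# Placeholder indicators (e.g. electoral, civil-liberties, press-freedom measures).
X_train = rng.normal(size=(300, 6))
y_train = (X_train[:, :3].mean(axis=1) > 0).astype(int)   # 1 = democratic example

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
model.fit(X_train, y_train)

X_new = rng.normal(size=(5, 6))                 # five "country-years" to score
svmdi = model.predict_proba(X_new)[:, 1]        # continuous score on the 0-1 interval
print(np.round(svmdi, 3))
```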
Procedia PDF Downloads 380
1936 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference
Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev
Abstract:
Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical or subtropical countries, including Sri Lanka. To reveal insights into the complexity of the dynamics of this disease and study its drivers, a comprehensive model capable of both robust forecasting and insightful inference of drivers, while capturing the co-circulation of several virus strains, is essential. However, existing studies mostly focus on only one aspect at a time and do not integrate and carry insights across these siloed approaches. While mechanistic models are developed to capture immunity dynamics, they are often oversimplified and lack integration of all the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions in Sri Lanka at the weekly time scale. By conducting ablation studies, lag effects of up to 12 weeks for the time-varying climate factors were determined. The model demonstrates superior predictive performance over a pure machine learning approach when considering lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings on drivers while adjusting for the dynamics and influences of immunity and the introduction of a new strain. The study uncovers strong influences of socioeconomic variables: population density, mobility, household income and rural vs. urban population. The study reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicated sensitivity to the vegetation index, both maximum and average. Predictions on testing data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of incorporating hybrid modelling techniques that combine biologically informed model structures with flexible data-driven estimates of model parameters. The findings show the potential both for inference of drivers in situations of complex disease dynamics and for robust forecasting models.
Keywords: compartmental model, climate, dengue, machine learning, social-economic
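The vector (SEI) to human (SEIR) structure mentioned above corresponds to a standard system of ordinary differential equations; the sketch below integrates a constant-parameter version with SciPy. The parameter values are illustrative assumptions — in the study they are time-varying and covariate-driven.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constant parameters (per day); the study uses covariate-driven values.
b, beta_hv, beta_vh = 0.5, 0.33, 0.33     # biting rate and transmission probabilities
sigma_v, mu_v = 1 / 10, 1 / 14            # vector incubation and mortality rates
sigma_h, gamma_h = 1 / 5.9, 1 / 7         # human incubation and recovery rates
N_h, N_v = 1e6, 2e6                       # host and vector population sizes

def sei_seir(t, y):
    S_v, E_v, I_v, S_h, E_h, I_h, R_h = y
    lam_v = b * beta_hv * I_h / N_h       # force of infection on vectors
    lam_h = b * beta_vh * I_v / N_h       # force of infection on humans
    return [
        mu_v * N_v - lam_v * S_v - mu_v * S_v,     # S_v
        lam_v * S_v - (sigma_v + mu_v) * E_v,      # E_v
        sigma_v * E_v - mu_v * I_v,                # I_v
        -lam_h * S_h,                              # S_h
        lam_h * S_h - sigma_h * E_h,               # E_h
        sigma_h * E_h - gamma_h * I_h,             # I_h
        gamma_h * I_h,                             # R_h
    ]

y0 = [N_v - 100, 0, 100, N_h - 10, 0, 10, 0]       # small seed of infection
sol = solve_ivp(sei_seir, (0, 365), y0, t_eval=np.arange(0, 366, 7))  # weekly output
print("peak weekly infectious humans:", int(sol.y[5].max()))
```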
Procedia PDF Downloads 86
1935 Neural Networks Models for Measuring Hotel Users Satisfaction
Authors: Asma Ameur, Dhafer Malouche
Abstract:
Nowadays, user comments on the Internet have an important impact on hotel bookings. This confirms that the e-reputation issue can influence the likelihood of customer loyalty to a hotel. In this way, e-reputation has become a real differentiator between hotels. For this reason, we have a unique opportunity in the opinion mining field to analyze these comments. In fact, this field provides the possibility of extracting information related to the polarity of user reviews. This sentiment study (opinion mining) represents a new line of research for analyzing unstructured textual data. Knowing the e-reputation score helps the hotelier to better manage his marketing strategy. The score we then obtain is translated into an image of the hotels that differentiates between them. Therefore, the present research highlights the importance of hotel satisfaction scoring. To calculate the satisfaction score, the sentiment analysis can be performed using several machine learning techniques. In fact, this study treats the extracted textual data by using the Artificial Neural Networks (ANNs) approach. In this context, we adopt the aforementioned technique to extract information from the comments available on the ‘Trip Advisor’ website. This paper details the description and the modeling of the ANNs approach for the scoring of online hotel reviews. In summary, the validation of this method provides a significant model for hotel sentiment analysis. It thus makes it possible to determine precisely the polarity of hotel users’ reviews. The empirical results show that ANNs are an accurate approach for sentiment analysis. The obtained results also show that the proposed approach supports dimensionality reduction for textual data clustering. Thus, this study provides researchers with a useful exploration of this technique. Finally, we outline guidelines for future research in the hotel e-reputation field, such as comparing ANNs with other techniques.
Keywords: clustering, consumer behavior, data mining, e-reputation, machine learning, neural network, online hotel reviews, opinion mining, scoring
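As a minimal stand-in for the ANN scoring pipeline described above, the sketch below vectorizes review text with TF-IDF and trains a small feed-forward network (scikit-learn's MLPClassifier) whose positive-class probability serves as a satisfaction score; the toy reviews and network size are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy labelled reviews (1 = positive, 0 = negative); real data would come from Trip Advisor.
reviews = [
    "great room and friendly staff", "terrible service and dirty bathroom",
    "lovely breakfast, would stay again", "noisy, overpriced and rude reception",
    "perfect location, very clean", "awful experience, never coming back",
]
labels = [1, 0, 1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(reviews, labels)

new_reviews = ["clean and quiet, helpful staff", "broken shower and unhelpful desk"]
scores = model.predict_proba(new_reviews)[:, 1]   # satisfaction score in [0, 1]
print([round(float(s), 2) for s in scores])
```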
Procedia PDF Downloads 137
1934 Artificial Intelligence Created Inventions
Authors: John Goodhue, Xiaonan Wei
Abstract:
Current legal decisions and policies regarding the naming of artificial intelligence as an inventor are reviewed, with emphasis on the recent decisions by the European Patent Office regarding the DABUS inventions, holding that an artificial intelligence machine cannot be an inventor. Next, a set of hypotheticals is introduced and examined to better understand how artificial intelligence might be used to create or assist in creating new inventions, and how application of existing or proposed changes in the law would affect the ability to protect these inventions, including restrictions on artificial intelligence being named as an inventor, ownership of inventions made by artificial intelligence, and the effects on legal standards for inventiveness or obviousness.
Keywords: artificial intelligence, innovation, invention, patent
Procedia PDF Downloads 178
1933 Case Analysis of Bamboo Based Social Enterprises in India-Improving Profitability and Sustainability
Authors: Priyal Motwani
Abstract:
The current market for bamboo products in India is about Rs. 21,000 crores and is highly unorganised and fragmented. In this study, we have closely analysed the structure and functions of a major bamboo craft based organisation in Kerala, India, and elaborated on its value chain, product mix, pricing strategy, supply chain, collaborations and competitive landscape. We have identified six major bottlenecks that are prevalent in such organisations, based on the Indian context, in relation to their product mix, asset management, and supply chain, with the corresponding waste management and retail network. By carrying out secondary and primary research (a sample space of 5,000), the study has identified the target customers for the bamboo based products and alternative revenue streams (eco-tourism, microenterprises, training) that can boost the existing revenue by 150%. We have then recommended an optimum product mix, covering premium, medium and low valued processing, for medium sized bamboo based organisations, in accordance with their capacity, to maximize their revenue potential. After studying such organisations and their counterparts, the study has established an optimum retail network, considering B2B, B2C physical and online retail, to maximize their sales to their target groups. On the basis of the results obtained from the analysis of present and future trends, our study gives recommendations to improve the revenue potential of bamboo based organisations in India and promote sustainability.
Keywords: bamboo, bottlenecks, optimization, product mix, retail network, value chain
Procedia PDF Downloads 217
1932 Cantilever Secant Pile Constructed in Sand: Numerical Comparative Study and Design Aids – Part II
Authors: Khaled R. Khater
Abstract:
All civil engineering projects include excavation work and therefore need some retaining structures. Cantilever secant pile walls are an economical supporting system for depths up to 5.0 m. The parameters controlling wall tip displacement are the focus of this paper. Two analysis techniques have been investigated and compared: the conventional method and finite element analysis. Accordingly, two computer programs have been used, an Excel sheet and Plaxis 2D. Two soil models have been used throughout this study: the Mohr-Coulomb soil model and the isotropic hardening soil model. During this study, two soil densities have been considered, i.e. loose and dense sand. Ten wall rigidities have been analyzed, covering the range from perfectly flexible to completely rigid walls. Three excavation depths, i.e. 3.0 m, 4.0 m and 5.0 m, were tested to cover the practical range of secant piles. This work provides useful guidance on secant piles to assist designers and specification committees. Also, finite element analysis with isotropic hardening is recommended as the fair judge when two designs conflict. A rational procedure using empirical equations has been suggested to upgrade the conventional method to predict the wall tip displacement ‘δ’. Also, a reasonable limitation of ‘δ’ as a function of the excavation depth ‘h’ has been suggested. Furthermore, it has been found that, after a certain penetration depth, any further increase does not positively affect the wall tip displacement, i.e. it constitutes over-design and is uneconomical.
Keywords: design aids, numerical analysis, secant pile, wall tip displacement
Procedia PDF Downloads 191
1931 Bullying with Neurodiverse Students and Education Policy Reform
Authors: Fharia Tilat Loba
Abstract:
Studies show that there is a certain group of students who are more vulnerable to bullying due to their physical appearance, disability, sexual preference, race, and lack of social and behavioral skills. Students with autism spectrum disorders (ASD) are one of the most vulnerable groups among these at-risk groups. Researchers suggest that focusing on vulnerable groups of students who can be the target of bullying helps to understand the causes and patterns of aggression, which ultimately helps in structuring intervention programs to reduce bullying. Since Australia ratified the United Nations Convention on the Rights of Persons with Disabilities in 2006, it has been committed to providing an inclusive, safe, and effective learning environment for all children. In addition, the 2005 Disability Standards for Education seeks to ensure that students with disabilities can access and participate in education on the same basis as other students, covering all aspects of education, including harassment and victimization. However, bullying hinders students’ ability to fully participate in schooling. The proposed study aims to synthesize the notions of traditional bullying and cyberbullying and attempts to understand the experiences of students with ASD who are experiencing bullying in their schools. The proposed study will primarily focus on identifying the gaps between policy and practice related to bullying, and it will also attempt to understand the experiences of parents of students with ASD and professionals who have experience dealing with bullying at the school level in Australia. This study is expected to contribute to the theoretical knowledge of the bullying phenomenon and provide a reference for advocacy at the school, organization, and government levels.
Keywords: education policy, bullying, Australia, neurodiversity
Procedia PDF Downloads 58
1930 Multi-Agent System Based Solution for Operating Agile and Customizable Micro Manufacturing Systems
Authors: Dylan Santos De Pinho, Arnaud Gay De Combes, Matthieu Steuhlet, Claude Jeannerat, Nabil Ouerhani
Abstract:
The Industry 4.0 initiative has been launched to address huge challenges related to ever-smaller batch sizes. The end-user need for highly customized products requires highly adaptive production systems in order to keep shop floors at the same efficiency. Most of the classical software solutions that operate the manufacturing processes on a shop floor are based on rigid Manufacturing Execution Systems (MES), which are not capable of adapting the production order on the fly depending on changing demands and/or conditions. In this paper, we present a highly modular and flexible solution to orchestrate a set of production systems composed of a micro-milling machine-tool, a polishing station, a cleaning station, a part inspection station, and a rough material store. The different stations are installed according to a novel matrix configuration of a 3x3 vertical shelf. The different cells of the shelf are connected through horizontal and vertical rails on which a set of shuttles circulate to transport the machined parts from one station to another. Our software solution for orchestrating the tasks of each station is based on a multi-agent system. Each station and each shuttle is operated by an autonomous agent. All agents communicate with a central agent that holds all the information about the manufacturing order. The core innovation of this paper lies in the path planning of the different shuttles, with two major objectives: 1) reduce the waiting time of stations and thus reduce the cycle time of the entire part, and 2) reduce disturbances like the vibration generated by the shuttles, which highly impacts the manufacturing process and thus the quality of the final part. Simulation results show that the cycle time of the parts is reduced by up to 50% compared with MES-operated linear production lines, while the disturbance is systematically avoided for critical stations like the milling machine-tool.
Keywords: multi-agent systems, micro-manufacturing, flexible manufacturing, transfer systems
Procedia PDF Downloads 130
1929 Effect of the Workpiece Position on the Manufacturing Tolerances
Authors: Rahou Mohamed, Sebaa Fethi, Cheikh Abdelmadjid
Abstract:
Manufacturing tolerancing is intended to determine the intermediate geometrical and dimensional states of the part during its manufacturing process. These manufacturing dimensions also serve to satisfy not only the functional requirements given in the definition drawing but also the manufacturing constraints, for example geometrical defects of the machine, vibration, and the wear of the cutting tool. The choice of positioning has an important influence on the cost and quality of manufacture. To address this problem, a two-step approach has been developed. The first step is dedicated to the determination of the optimum position. In the second step, a study was carried out on the effect of tightening on the tolerance interval.
Keywords: dispersion, tolerance, manufacturing, position
Procedia PDF Downloads 339
1928 Optimizing the Use of Google Translate in Translation Teaching: A Case Study at Prince Sultan University
Authors: Saadia Elamin
Abstract:
The quasi-universal use of smartphones with an internet connection available all the time makes it a reflex action for translation undergraduates, once they encounter the least translation problem, to turn to the freely available web resource: Google Translate. As with other translator resources and aids, the use of Google Translate needs to be moderated in such a way that it contributes to developing translation competence. Here, instead of interfering with students’ learning by providing ready-made solutions which might not always fit into the contexts of use, it can help to consolidate the skills of analysis and transfer which students have already acquired. One way to do so is by training students to adhere to the basic principles of translation work. The most important of these is that analyzing the source text for comprehension comes first and foremost before jumping into the search for target language equivalents. Another basic principle is that certain translator aids and tools can be used for comprehension, while others are to be confined to the phase of re-expressing the meaning into the target language. The present paper reports on the experience of making a measured and reasonable use of Google Translate in translation teaching at Prince Sultan University (PSU), Riyadh. First, it traces the development that has taken place in the field of translation in this age of information technology, be it in translation teaching and translator training, or in the real-world practice of the profession. Second, it describes how, with the aim of reflecting this development onto the way translation is taught, senior students, after being trained on post-editing machine translation output, are authorized to use Google Translate in classwork and assignments. Third, the paper elaborates on the findings of this case study, which has demonstrated that Google Translate, if used at the appropriate levels of training, can help to enhance students’ ability to perform different translation tasks. This help extends from the search for terms and expressions to the tasks of drafting the target text, revising its content and finally editing it. In addition, using Google Translate in this way fosters a reflexive and critical attitude towards web resources in general, thus maximizing the benefit gained from them in preparing students to meet the requirements of the modern translation job market.
Keywords: Google Translate, post-editing machine translation output, principles of translation work, translation competence, translation teaching, translator aids and tools
Procedia PDF Downloads 476
1927 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2
Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk
Abstract:
Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands’ characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case specific. In this study, biomass production and biodiversity indices (species richness and Fisher’s α) are modeled in 150 grassland plots for three sites across Germany. These sites represent a North-South gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 & 2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² of 0.5), predictions of biodiversity indices were poor (r² of 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and ensuring spatial independence to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.
Keywords: ecosystem services, grassland management, machine learning, remote sensing
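The model comparison described above can be outlined with a feed-forward network and a random forest regressor scored on a site-wise hold-out; the synthetic predictors stand in for the Sentinel-2 reflectances and topoedaphic variables, and the architectures and the normalisation of the relative RMSE are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(8)

# Placeholder predictors: 10 Sentinel-2 bands + 4 topoedaphic variables per plot.
X = rng.normal(size=(150, 14))
biomass = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=1.0, size=150)

# "Leave-one-site-out" style split: train on two sites, test on the third.
site = np.repeat([0, 1, 2], 50)
train, test = site != 2, site == 2

models = {
    "DNN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=5000, random_state=0)),
    "RF": RandomForestRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X[train], biomass[train])
    pred = model.predict(X[test])
    # Relative RMSE normalised by the standard deviation of the observations (one possible choice).
    rrmse = np.sqrt(mean_squared_error(biomass[test], pred)) / biomass[test].std()
    print(f"{name}: r2={r2_score(biomass[test], pred):.2f}  relative RMSE={rrmse:.2f}")
```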
Procedia PDF Downloads 219