Search results for: human machine collaboration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11556

10866 The Human Rights of Women in Brazilian Territory: A Literature Review of the Axes of the National Human Rights Program III

Authors: Ana Luiza Casasanta Garcia, Maria Del Carmen Cortizo

Abstract:

From the classic contractualist tradition and the early declarations of modern rights, policies for the protection and promotion of human rights have been debated in an attempt to ensure the realization of human dignity and its values, which are (re)negotiated according to the needs evidenced in each historical and contextual moment. Aiming to guarantee human rights to Brazilian citizens, the Third National Human Rights Program (PNDH III), created in 2009 and updated in 2010 and currently in force, sets out guidelines and recommendations to guarantee human rights, among them the rights of women in Brazil. Based on this document, this article aims to situate historically and culturally the understanding of human rights related to the rights of women in Brazilian territory, through an analysis of the guiding axes on women's rights in the PNDH III. In methodological terms, a qualitative approach and documentary research were used, and the data were analyzed through critical discourse analysis. As a result, it was found that the process of building and maintaining the guarantee of women's human rights requires a reformulation that also amounts to a social revolution. This is justified by the fact that, even though the PNDH III provides that guaranteeing the rights of women requires, for example, amending the Penal Code to decriminalize abortion and professionalize prostitution, these points remain highly controversial and are not put into practice by the State. Finally, the importance of critiquing politics and the current system of production of understandings in favor of this social transformation is emphasized.

Keywords: human rights of women, social transformation, national human rights program III, public policies

Procedia PDF Downloads 118
10865 An Examination of the Challenges of Domestication of International Laws and Human Rights Laws in Nigeria

Authors: Uche A. Nnawulezi

Abstract:

This study evolved from the need to examine and evaluate the difficulties in the domestication of international laws and human rights laws in Nigeria. Essentially, the paper based its examination on documentary evidence and depended mainly on secondary sources, for example, textbooks, journals, articles, periodicals, and research reports drawing on the views of international law experts, jurists, and human rights lawyers on the challenges of domesticating international laws and human rights laws in Nigeria. These data were analyzed through content analysis and careful observation of the current municipal laws, which pose great challenges to the domestication of international laws. The paper traces the historical backdrop of the practices in the use of international law in Nigeria and likewise considers the challenges inherent in these practices. The paper suggests that a sustainable domestication of international laws and their application in Nigerian courts will ensure better enforcement of human rights within the domestic jurisdiction.

Keywords: international law, human rights, domestication, challenges

Procedia PDF Downloads 229
10864 Anthropocentric and Ecocentric Representation of Human-Environment Relationship in Paulo Coelho's the Alchemist

Authors: Tooba Sabir, Namra Sabir, Mohammad Amjad Sabir

Abstract:

The human-environment relationship has been projected since the beginning of the literary tradition, i.e., the pastoral tradition; however, the interest of critics, writers, and poets in this view has developed over the last couple of decades because of the widening scope of environmental studies and growing environmental issues. One such novel that projects the human-environment relationship is 'The Alchemist,' one of Paulo Coelho's most widely read novels. Its central theme, that the universe conspires to help a person achieve his destiny, projects anthropocentrism and human domination by centralizing the human and devaluing the intrinsic worth of the ecosystem. However, ecocritical analysis of the text reveals that the novel also contains ecocentrism at several instances, e.g., 'everything on earth is being continuously transformed because earth is alive.' This portrays the ecosphere as a living and dynamic entity rather than a mere instrument for humans to achieve their destiny. The idea that the universe shares the same language projects the unity of nature, showing the relationship between the human and non-human aspects of the environment as one being, neither separate from nor superior to one another. It depicts the human as a part of the environment, not the lord of the world. Therefore, it can be concluded that the novel oscillates between the ecocentric and the anthropocentric phenomena. It is not suggested, however, that one phenomenon should be valued over the other, but that the complexities of both should be recognized, acknowledged, and valued in order to encourage the interactions between literature and the environment.

Keywords: anthropocentrism, ecocentrism, ecocritical analysis, human-environment relationship

Procedia PDF Downloads 302
10863 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Human motion recognition has attracted extensive attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, and content-based video compression and retrieval. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class, processing the motion sequence in two different directions, forward and backward. This modification avoids the misclassifications that can occur when recognizing similar motions. Two experiments are conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods on the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
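The two-direction classification idea can be sketched with a plain discrete HMM scored by the forward algorithm. This is a minimal illustration, not the authors' implementation: the parameters here would normally be trained per class (e.g. with Baum-Welch) on LMA-derived symbol sequences, and any concrete values below are hypothetical.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence `obs` (symbol indices) under an HMM with initial probabilities
    `pi`, transition matrix `A`, and emission matrix `B`."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha = alpha / c
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]
        c = alpha.sum()          # rescale at each step to avoid underflow
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

def classify(obs, models):
    """Score a sequence under a forward-direction and a reverse-direction
    model per class and pick the class with the highest combined score,
    mirroring the paper's two-DHMM-per-class idea."""
    scores = {label: forward_loglik(obs, *fwd) + forward_loglik(obs[::-1], *bwd)
              for label, (fwd, bwd) in models.items()}
    return max(scores, key=scores.get)
```

With two toy classes whose emission matrices favor different symbols, `classify` picks the class whose forward and backward models jointly assign the higher likelihood; similar motions that a single forward model confuses can differ once the reversed sequence is scored as well.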

Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model

Procedia PDF Downloads 195
10862 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing its complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data are difficult to obtain.
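A minimal sketch of the non-laboratory-data setting, under stated assumptions: the data below are synthetic, the feature names follow the abstract (age, sex, BMI, waist circumference), the labels are artificial, and scikit-learn's GradientBoostingClassifier stands in for XGBoost so the example needs nothing beyond NumPy and scikit-learn.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical non-laboratory features: age, sex, BMI, waist circumference (cm)
X = np.column_stack([
    rng.uniform(20, 80, n),
    rng.integers(0, 2, n),
    rng.uniform(18, 40, n),
    rng.uniform(60, 120, n),
])
# Synthetic CKD label loosely driven by age and BMI (illustration only)
y = ((X[:, 0] > 55) & (X[:, 2] > 28)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, round(model.score(X_te, y_te), 3))
```

In the paper's setting, the same two model families would be retrained on the laboratory and metabolic-index feature subsets as well, and their discrimination compared across follow-up points.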

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 91
10861 Development of a Decision-Making Method by Using Machine Learning Algorithms in the Early Stage of School Building Design

Authors: Pegah Eshraghi, Zahra Sadat Zomorodian, Mohammad Tahsildoost

Abstract:

Over the past decade, energy consumption in educational buildings has steadily increased. The purpose of this research is to provide a method to quickly predict the energy consumption of buildings at the early design stage, using separate evaluation of zones and decomposition of the building to eliminate geometric complexity. To produce this framework, machine learning algorithms such as support vector regression (SVR) and artificial neural networks (ANN) are used to predict energy consumption and thermal comfort metrics, with a school as a case study. The database consists of more than 55,000 samples across three climates of Iran. Cross-validation and unseen data have been used for validation. For a specific label, cooling energy, the prediction accuracy is at least 84% with SVR and 89% with ANN. The results show that the SVR performed much better than the ANN.
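The SVR-versus-ANN comparison can be sketched as follows. Everything here is assumed for illustration: the zone-level design variables and the cooling-energy target are synthetic, not the 55,000-sample simulation database used in the paper, and scikit-learn's MLPRegressor stands in for the ANN.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n = 400
# Hypothetical zone-level design variables: window-to-wall ratio,
# orientation (degrees), envelope U-value, occupancy density
X = rng.uniform([0.1, 0.0, 0.2, 0.05], [0.9, 360.0, 2.0, 0.5], (n, 4))
# Synthetic cooling-energy target (illustration only, not simulation output)
y = 5.0 * X[:, 0] + 3.0 * X[:, 2] + rng.normal(0.0, 0.1, n)

results = {}
for name, reg in [("SVR", SVR(C=10)),
                  ("ANN", MLPRegressor(hidden_layer_sizes=(32,),
                                       max_iter=2000, random_state=0))]:
    pipe = make_pipeline(StandardScaler(), reg)  # scale features for both models
    results[name] = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {results[name]:.3f}")
```

Cross-validated R² per model mirrors the paper's validation setup; in the real framework each zone would get its own feature set and target before the per-zone predictions are recombined.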

Keywords: early stage of design, energy, thermal comfort, validation, machine learning

Procedia PDF Downloads 73
10860 Value Addition of Quinoa (Chenopodium Quinoa Willd.) Using an Indigenously Developed Saponin Removal Machine

Authors: M.A. Ali, M. Matloob, A. Sahar, M. Yamin, M. Imran, Y.A. Yusof

Abstract:

Quinoa (Chenopodium quinoa Willd.), known as a pseudocereal, originated in South America's Andes. Quinoa is a good source of protein, amino acids, micronutrients, and bioactive components, and the lack of gluten makes it suitable for celiac patients. Saponins, the leading anti-nutrient, are found in the pericarp, which adheres to the seed and imparts a bitter flavor to the quinoa grain; they occur in amounts varying from 0.1% to 5%. This study was planned to design an indigenous machine to remove saponin from quinoa grains at the farm level and promote entrepreneurship. The machine consists of a feeding hopper, a rotating shaft, a grooved stone, a perforated steel cylinder, V-belts, pulleys, an electric motor, and mild steel angle iron and sheets. The motor transmits power to the shaft with a belt drive. The shaft, on which the grooved stone is mounted, rotates inside the perforated cylinder with a clearance of 2 mm and removes saponin by an abrasion mechanism. The saponin-removed quinoa was then dipped in water to test for residual saponin, which produces foam in water, and the data were statistically analyzed. The results showed that a raw seed feeding rate of 25 g/s and a milling time of 135 s completely removed saponin from the seeds with minimum grain losses of 2.85%. The economic analysis of the machine showed that its break-even point was achieved after one and a half months, with 18,000 s of operation and a production capacity of 33 g/s.

Keywords: quinoa seeds, saponin, abrasion mechanism, stone polishing, indigenous machine

Procedia PDF Downloads 61
10859 Parametric Study of Ball and Socket Joint for Bio-Mimicking Exoskeleton

Authors: Mukesh Roy, Basant Singh Sikarwar, Ravi Prakash, Priya Ranjan, Ayush Goyal

Abstract:

More than 11% of people suffer from bone weakness, resulting in an inability to walk or climb stairs, or from limited upper-body and limb mobility. This motivates a fresh bio-mimicking approach to the design of an exoskeleton to support human movement in cases of partial or total immobility, whether due to congenital or genetic factors, accident, or gerontological factors. A deeper insight into, and detailed understanding of, the workings of ball and socket joints is required. Our research mimics ball and socket joints to design snugly fitting exoskeletons. Our objective is to design an exoskeleton which is comfortable and whose presence is not felt when not in use. Towards this goal, a parametric study is conducted to provide detailed design parameters for fabricating an exoskeleton. This work builds on real data for the design of the exoskeleton, so that the designed exoskeleton will provide the required strength and support to the subject.

Keywords: bio-mimicking, exoskeleton, ball joint, socket joint, artificial limb, patient rehabilitation, joints, human-machine interface, wearable robotics

Procedia PDF Downloads 278
10858 The School Governing Council as the Impetus for Collaborative Education Governance: A Case Study of Two Benguet Municipalities in the Highlands of Northern Philippines

Authors: Maria Consuelo Doble

Abstract:

For decades, basic public education in the Philippines has been beleaguered by a governance scenario of multi-layered decision-making and the lack of collaboration between sectors in addressing issues on poor access to schools, high dropout rates, low survival rates, and poor student performance. These chronic problems persisted despite multiple efforts making it appear that the education system is incapable of reforming itself. In the mountainous rural towns of La Trinidad and Tuba, in the province of Benguet in Northern Philippines, collaborative education governance was catalyzed by the intervention of Synergeia Foundation, a coalition made up of individuals, institutions and organizations that aim to improve the quality of education in the Philippines. Its major thrust is to empower the major stakeholders at the community level to make education work by building the capacities of School Governing Councils (SGCs). Although mandated by the Department of Education in 2006, the SGCs in Philippine public elementary schools remained dysfunctional. After one year of capacity-building by Synergeia Foundation, some SGCs are already exhibiting active community-based multi-sectoral collaboration, while there are many that are not. With the myriad of factors hindering collaboration, Synergeia Foundation is now confronted with the pressing question: What are the factors that promote collaborative governance in the SGCs so that they can address the education-related issues that they are facing? Using Emerson’s (2011) framework on collaborative governance, this study analyzes the application of collaborative governance by highly-functioning SGCs in the public elementary schools of Tuba and La Trinidad. 
Findings of this action research indicate how the dynamics of collaboration, composed of three interactive and iterative components (principled engagement, shared motivation, and capacity for joint action), have resulted in meaningful short-term impacts such as stakeholder engagement and a decreased number of dropouts. The change in the behavior of stakeholders is indicative of adaptation to a more collaborative approach to governing education in Benguet highland settings such as Tuba and La Trinidad.

Keywords: basic public education, Benguet highlands, collaborative governance, School Governing Council

Procedia PDF Downloads 279
10857 Use of Machine Learning in Data Quality Assessment

Authors: Bruno Pinto Vieira, Marco Antonio Calijorne Soares, Armando Sérgio de Aguiar Filho

Abstract:

Nowadays, a massive amount of information is produced by different data sources, including mobile devices and transactional systems. In this scenario, concerns arise about how to establish and maintain data quality, which is now treated as a product to be defined, measured, analyzed, and improved to meet the needs of consumers, who use these data in decision making and company strategy. Information of low quality can lead to issues that consume time and money, such as missed business opportunities, inadequate decisions, and bad risk-management actions. Identifying, evaluating, and selecting data sources of adequate quality has become a costly task for users, since the sources do not provide information about their quality. Traditional data quality control methods are based on user experience or business rules, limiting performance and slowing down the process with less than desirable accuracy. Using advanced machine learning algorithms, it is possible to take advantage of computational resources to overcome these challenges and add value for companies and users. In this study, machine learning is applied to data quality analysis on different datasets, comparing the performance of the techniques across the dimensions of quality assessment. As a result, we were able to create a ranking of the approaches used, as well as a system able to carry out data quality assessment automatically.
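A minimal sketch of ML-based quality assessment, assuming hypothetical per-source profile features (completeness, validity, duplicate rate) and an artificial "acceptable quality" label; the paper's actual datasets and quality dimensions are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 600
# Hypothetical per-source profile features:
#   completeness ratio, share of values passing format checks, duplicate rate
X = np.column_stack([rng.uniform(0, 1, n),
                     rng.uniform(0, 1, n),
                     rng.uniform(0, 0.3, n)])
# Synthetic "acceptable quality" label (illustration only): complete,
# mostly valid, and nearly duplicate-free sources pass
y = ((X[:, 0] > 0.6) & (X[:, 1] > 0.5) & (X[:, 2] < 0.15)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```

A ranking of candidate sources, as described in the abstract, could then come from sorting sources by `clf.predict_proba(X)[:, 1]` instead of the hard labels.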

Keywords: machine learning, data quality, quality dimension, quality assessment

Procedia PDF Downloads 138
10856 Performance Analysis of Traffic Classification with Machine Learning

Authors: Htay Htay Yi, Zin May Aye

Abstract:

Network security is central to the ICT environment, because malicious users are continually growing in education, business, and other ICT-related realms. Network security contraventions are typically described and examined centrally, based on a security event management system. Firewalls, Intrusion Detection Systems (IDS), and Intrusion Prevention Systems are becoming essential to monitor or prevent potential violations, attack incidents, and imminent threats. In this system, the firewall rules are set only where the system policies require them. The dataset deployed in this system is derived from a testbed environment. DoS and PortScan traffic is applied in the testbed with a firewall and IDS implementation. The network traffic is classified as normal or attack in the existing testbed environment based on six machine learning classification methods applied in the system. The dataset, which is based on CICIDS2017 with some added features, was collected for DoS and PortScan traffic. This system tested 26 features from the applied dataset. The aims of the system are to reduce false positive rates and to improve accuracy in the implemented testbed design. The system also demonstrates good performance by selecting important features and comparing the machine learning classifiers on the dataset.
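A rough sketch of the classification setup, with assumptions labeled: the flow features and their distributions are invented stand-ins for CICIDS2017-style features, a single Random Forest replaces the six classifiers compared in the paper, and univariate feature selection illustrates the "selecting important features" step.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

def flows(n, pkts_per_s, distinct_ports):
    """Hypothetical flow windows: two informative features plus six noise
    columns standing in for the remaining CICIDS2017-style features."""
    return np.column_stack([rng.normal(pkts_per_s, 5.0, n),
                            rng.normal(distinct_ports, 2.0, n),
                            rng.normal(0.0, 1.0, (n, 6))])

X = np.vstack([flows(300, 10, 3),      # normal traffic
               flows(150, 200, 3),     # DoS: very high packet rate
               flows(150, 12, 100)])   # PortScan: many distinct ports
y = np.repeat([0, 1, 2], [300, 150, 150])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(SelectKBest(f_classif, k=2),       # keep informative features
                    RandomForestClassifier(random_state=0)).fit(X_tr, y_tr)
pred = clf.predict(X_te)
acc = (pred == y_te).mean()
fpr = (pred[y_te == 0] != 0).mean()    # normal windows flagged as attack
print(f"accuracy={acc:.3f}  false-positive rate={fpr:.3f}")
```

The held-out false-positive rate here corresponds to the abstract's goal of reducing false positives; in the paper this would be reported per classifier across the 26 tested features.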

Keywords: false negative rate, intrusion detection system, machine learning methods, performance

Procedia PDF Downloads 110
10855 System and Method for Providing Web-Based Remote Application Service

Authors: Shuen-Tai Wang, Yu-Ching Lin, Hsi-Ya Chang

Abstract:

With the development of virtualization technologies, a new type of service, cloud computing, has emerged. Cloud users usually encounter the problem of how to use the virtualized platform easily over the web without requiring plug-ins or the installation of special software. The objective of this paper is to develop a system and a method enabling process interfacing within an automation scenario for accessing a remote application through the web browser. To meet this challenge, we have devised a web-based interface that allows the system to shift the GUI application from the traditional local environment to the cloud platform, where it is stored on a remote virtual machine. We designed the sketch of the web interface following the cloud virtualization concept, seeking to enable communication and collaboration among users. We describe the design requirements of remote application technology and present implementation details of the web application and its associated components. We conclude that this effort has the potential to provide an elastic and resilient environment for several application services. Users no longer have to bear the burden of system maintenance, and the overall cost of software licenses and hardware is reduced. Moreover, this remote application service represents the next step toward the mobile workplace, letting users access the remote application virtually from anywhere.

Keywords: virtualization technology, virtualized platform, web interface, remote application

Procedia PDF Downloads 273
10854 Machine Learning Approach for Anomaly Detection in the Simulated IEC 60870-5-104 Traffic

Authors: Stepan Grebeniuk, Ersi Hodo, Henri Ruotsalainen, Paul Tavolato

Abstract:

Substation security plays an important role in the power delivery system. During the past years, there has been an increase in the number of attacks on the automation networks of substations. In spite of that, not enough focus has been dedicated to the protection of such networks. Aiming to design a specialized anomaly detection system based on machine learning, in this paper we discuss the IEC 60870-5-104 protocol, which is used for communication between the substation and the control station, and focus on the simulation of substation traffic. Firstly, we simulate the communication between the substation slave and the server. Secondly, we compare the system's normal behavior and its behavior under attack, in order to extract the right features needed for building an anomaly detection system. Lastly, based on these features, we propose an anomaly detection system for the asynchronous IEC 60870-5-104 protocol.

Keywords: anomaly detection, IEC 60870-5-104, machine learning, man-in-the-middle attacks, substation security

Procedia PDF Downloads 353
10853 Automated Manual Handling Risk Assessments: Practitioner Experienced Determinants of Automated Risk Analysis and Reporting Being a Benefit or Distraction

Authors: S. Cowley, M. Lawrance, D. Bick, R. McCord

Abstract:

Technology that automates manual handling (musculoskeletal disorder, or MSD) risk assessments is increasingly available to ergonomists, engineers, and generalist health and safety practitioners alike. The risk assessment process is generally based on wearable motion sensors that capture information about worker movements for real-time or post-hoc analysis. Traditionally, MSD risk assessment is undertaken with the assistance of a checklist, such as that from the SafeWork Australia code of practice, with the expert assessor observing the task and ideally engaging the worker in a discussion about the detail. Automation enables the non-expert to complete assessments and does not always require the assessor to be present. This clearly has cost and time benefits for the practitioner, but is it an improvement on assessment by a human? Human risk assessments draw on the knowledge and expertise of the assessor but, like all risk assessments, are highly subjective. The complexity of the checklists and models used in the process can be off-putting and sometimes leads to the assessment becoming the focus and the end, rather than a means to an end; the focus on risk control is lost. Automated risk assessment handles the complexity of the assessment for the assessor and delivers a simple risk score that enables decision-making about risk control. Being machine-based, it is objective and will deliver the same result each time it assesses an identical task. However, the WHS professional needs to know that this emergent technology asks the right questions and delivers the right answers, and whether it improves the risk assessment process and results or simply distances the professional from the task and the worker. They need clarity as to whether automation of manual task risk analysis and reporting leads to risk control or merely to a focus on the worker. Critically, they need evidence as to whether automation in this area of hazard management leads to better risk control or just a bigger collection of assessments. This examination of practitioner-experienced determinants of automated manual task risk analysis and reporting being a benefit or a distraction will address an understanding of emergent risk assessment technology, its use, and things to consider when making decisions about adopting and applying these technologies.
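For illustration only, the kind of "simple risk score" such tools derive from wearable-sensor data might look like the following toy function; the thresholds and weights are hypothetical and do not implement the SafeWork Australia checklist or any commercial product.

```python
import numpy as np

def risk_score(trunk_flexion_deg, load_kg):
    """Crude, hypothetical MSD risk score from wearable-sensor samples:
    weight time spent in moderate (>20 deg) and severe (>60 deg) trunk
    flexion, scaled up by the handled load. Illustration only."""
    flex = np.asarray(trunk_flexion_deg, dtype=float)
    moderate = np.mean((flex > 20) & (flex <= 60))  # share of samples
    severe = np.mean(flex > 60)
    return (1 * moderate + 3 * severe) * (1 + load_kg / 10)

# 100 posture samples: mostly upright, some moderate and severe flexion
samples = np.concatenate([np.full(80, 10.0), np.full(15, 40.0), np.full(5, 70.0)])
print(round(risk_score(samples, load_kg=5), 3))
```

The practitioner question in the abstract is precisely whether a single number like this supports risk control decisions or hides the task detail an expert observer would have probed.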

Keywords: automated, manual-handling, risk-assessment, machine-based

Procedia PDF Downloads 110
10852 The Roles of Parental Involvement in the Teaching-Learning Process of Students with Special Needs: Perceptions of Special Needs Education Teachers

Authors: Chassel T. Paras, Tryxzy Q. Dela Cruz, Ma. Carmela Lousie V. Goingco, Pauline L. Tolentino, Carmela S. Dizon

Abstract:

In implementing inclusive education, parental involvement is considered an irreplaceable contributing factor. Parental involvement is described as an indispensable aspect of the teaching-learning process and has a remarkable effect on students' academic performance. However, there are still differences in the viewpoints, expectations, and needs of both parents and teachers that are not yet fully conveyed in their relationship; hence, the perceptions of SNED teachers are essential to their collaboration with parents. This qualitative study explored how SNED teachers perceive the roles of parental involvement in the teaching-learning process of students with special needs. To answer this question, one-on-one, face-to-face, semi-structured interviews were conducted with three SNED teachers in a selected public school in Angeles City, Philippines, that offers special needs education services. The gathered data were then analyzed using Interpretative Phenomenological Analysis (IPA). The results revealed four superordinate themes: (1) roles of parental involvement, (2) parental involvement opportunities, (3) barriers to parental involvement, and (4) parent-teacher collaboration practices. These results indicate that SNED teachers are aware of the roles and importance of parental involvement; however, despite parent-teacher collaboration, there are still barriers that impede parental involvement. SNED teachers also acknowledge the significant role of parents, who serve as main figures in the teaching-learning process of their children with special needs. Lastly, these results can be used as input in developing a school-facilitated parental involvement framework that encompasses the contribution of SNED teachers in planning, developing, and evaluating parental involvement programs, which future researchers can also use in their studies.

Keywords: parental involvement, special needs education, teaching-learning process, teachers’ perceptions, special needs education teachers, interpretative phenomenological analysis

Procedia PDF Downloads 93
10851 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression, partial least squares regression, extra tree regression, random forest regression, extreme gradient boosting, and principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy.
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
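A reduced sketch of the regression setting, under stated assumptions: the spectra below are synthetic (a single concentration-scaled absorption peak plus noise) rather than measured NIR data, and one SVR model stands in for the six regressors compared in the study. Only the 20 mg/dl concentration grid follows the paper.

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(5)
wavelengths = np.linspace(1500, 1800, 100)        # hypothetical NIR band (nm)
conc = np.repeat(np.arange(20, 420, 20), 10)      # 20 mg/dl steps, 10 replicates
# Synthetic spectra: one absorption peak scaled by concentration, plus noise
peak = np.exp(-((wavelengths - 1600.0) ** 2) / (2 * 30.0 ** 2))
X = conc[:, None] / 400.0 * peak[None, :] + rng.normal(0.0, 0.01, (conc.size, 100))

X_tr, X_te, y_tr, y_te = train_test_split(X, conc, random_state=0)
model = make_pipeline(StandardScaler(), SVR(C=100)).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
print(f"test R^2 = {r2:.3f}")
```

Repeating the random split several times and averaging R², as the paper does, would give a less split-dependent estimate than the single split shown here.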

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 76
10850 Life Prediction Method of Lithium-Ion Battery Based on Grey Support Vector Machines

Authors: Xiaogang Li, Jieqiong Miao

Abstract:

To address the low prediction accuracy of the grey forecasting model, an improved grey prediction model is put forward. First, a trigonometric function transform is applied to the original data sequence to improve its smoothness; this model is called SGM (smoothed grey prediction model). The improved grey model is then combined with a support vector machine, yielding the grey support vector machine model (SGM-SVM). Before the model is established, the data are preprocessed with trigonometric functions and the accumulated generating operation in order to enhance their smoothness and weaken their randomness. A support vector machine (SVM) prediction model is then established on the preprocessed data, with the model parameters selected by a genetic algorithm to obtain the global optimum. Finally, the data are restored through the regressive generating operation to obtain the forecast. To show that the SGM-SVM model is superior to other models, battery life data from CALCE were selected. The presented model was used to predict battery life, and the predicted result was compared with those of the grey model and the support vector machine. For a more intuitive comparison of the three models, this paper presents the root mean square error of each. The results show that the grey support vector machine (SGM-SVM) gives the best life prediction, with a root mean square error of only 3.18%.
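The preprocessing pipeline described above (trigonometric smoothing, accumulated generating operation, and the inverse "regressive generation") can be sketched as follows; the capacity sequence is invented for illustration, and a real study would fit a GA-tuned SVM to the transformed series.

```python
import numpy as np

def ago(x):
    """Accumulated generating operation: cumulative sums weaken randomness."""
    return np.cumsum(x)

def iago(x1):
    """Inverse AGO ("regressive generation"): restores the original sequence."""
    return np.diff(x1, prepend=0.0)

# Invented capacity-fade sequence (Ah); the study used CALCE battery data.
capacity = np.array([1.10, 1.08, 1.05, 1.03, 0.99, 0.96, 0.92, 0.89])

# Trigonometric transform to raise the smoothness of the data before modelling.
smoothed = np.sin(np.pi * capacity / (2 * capacity.max()))
x1 = ago(smoothed)

# ... here an SVM with GA-tuned parameters would be fitted to x1 ...

# Round-trip check: the inverse operations recover the original sequence.
recovered = 2 * capacity.max() / np.pi * np.arcsin(iago(x1))
```

The round-trip works because the scaled angles stay within [0, π/2], where arcsin exactly inverts sin.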

Keywords: grey prediction model, trigonometric functions, support vector machines, genetic algorithms, root mean square error

Procedia PDF Downloads 450
10849 Solving Single Machine Total Weighted Tardiness Problem Using Gaussian Process Regression

Authors: Wanatchapong Kongkaew

Abstract:

This paper proposes an application of a probabilistic technique, namely Gaussian process regression, for estimating an optimal sequence for the single machine total weighted tardiness (SMTWT) scheduling problem. In this work, the Gaussian process regression (GPR) model is utilized to predict an optimal sequence for the SMTWT problem, and its solution is improved by using an iterated local search based on a simulated annealing scheme, called the GPRISA algorithm. The results show that the proposed GPRISA method achieves very good performance and a reasonable trade-off between solution quality and time consumption. Moreover, in terms of deviation from the best-known solution, the proposed mechanism noticeably outperforms existing approaches.
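A minimal sketch of the improvement stage: a total weighted tardiness evaluator plus an iterated local search with a simulated-annealing acceptance rule. The five-job instance is invented, and the earliest-due-date start sequence stands in for the GPR-predicted sequence.

```python
import math
import random

def twt(seq, p, w, d):
    """Total weighted tardiness of a job sequence on a single machine."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += w[j] * max(0, t - d[j])
    return total

def sa_improve(seq, p, w, d, temp=100.0, cooling=0.95, iters=2000, seed=0):
    """Improve a start sequence by pairwise swaps under a simulated-annealing
    acceptance rule (the local-search half of the GPRISA idea)."""
    rng = random.Random(seed)
    best = cur = list(seq)
    for _ in range(iters):
        i, j = rng.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]
        delta = twt(cand, p, w, d) - twt(cur, p, w, d)
        if delta <= 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            cur = cand
            if twt(cur, p, w, d) < twt(best, p, w, d):
                best = cur
        temp *= cooling
    return best

# Invented 5-job instance: processing times, weights, due dates.
p, w, d = [3, 2, 4, 1, 5], [2, 1, 3, 1, 2], [4, 6, 5, 3, 10]
start = sorted(range(5), key=lambda j: d[j])   # EDD stand-in for the GPR seed
best = sa_improve(start, p, w, d)
```

Because the incumbent is only replaced on strict improvement, the returned sequence can never be worse than the start sequence.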

Keywords: Gaussian process regression, iterated local search, simulated annealing, single machine total weighted tardiness

Procedia PDF Downloads 300
10848 Deep Reinforcement Learning Model Using Parameterised Quantum Circuits

Authors: Lokes Parvatha Kumaran S., Sakthi Jay Mahenthar C., Sathyaprakash P., Jayakumar V., Shobanadevi A.

Abstract:

With the evolution of technology, the need to solve complex computational problems like machine learning and deep learning has shot up, but even the most powerful classical supercomputers find it difficult to execute these tasks. With the recent development of quantum computing, researchers and tech giants strive for new quantum circuits for machine learning tasks, as existing work on Quantum Machine Learning (QML) promises lower memory consumption and fewer model parameters. However, it is difficult to simulate classical deep learning models on existing quantum computing platforms due to the inflexibility of deep quantum circuits. As a consequence, it is essential to design viable quantum algorithms for QML on noisy intermediate-scale quantum (NISQ) devices. The proposed work aims to explore Variational Quantum Circuits (VQC) for deep reinforcement learning by remodeling the experience replay and target network into a representation of VQC. In addition, to reduce the number of model parameters, quantum information encoding schemes are used to achieve better results than classical neural networks. VQCs are employed to approximate the deep Q-value function for decision-making and policy-selection reinforcement learning with experience replay and the target network.
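The Q-value readout of a VQC can be illustrated with a single-qubit circuit simulated directly in NumPy. This toy (one RY encoding rotation, trainable RY layers, and a Pauli-Z expectation as the Q-value proxy) is an assumed stand-in for the multi-qubit circuits the work targets, not the paper's architecture.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def vqc_q_value(state_angle, params):
    """Toy variational circuit: encode a (pre-scaled) state feature as an RY
    angle, apply trainable RY layers, and read out <Z> as a Q-value proxy."""
    psi = np.array([1.0, 0.0])            # |0>
    psi = ry(state_angle) @ psi           # quantum information encoding
    for theta in params:                  # trainable variational layers
        psi = ry(theta) @ psi
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(psi @ z @ psi)           # expectation value, in [-1, 1]

q = vqc_q_value(0.3, [0.1, -0.2])         # equals cos(0.3 + 0.1 - 0.2)
```

Consecutive RY rotations compose by adding their angles, so the bounded expectation value varies smoothly with the trainable parameters, which is what gradient-based RL training exploits.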

Keywords: quantum computing, quantum machine learning, variational quantum circuit, deep reinforcement learning, quantum information encoding scheme

Procedia PDF Downloads 113
10847 A Machine Learning Approach for Anomaly Detection in Environmental IoT-Driven Wastewater Purification Systems

Authors: Giovanni Cicceri, Roberta Maisano, Nathalie Morey, Salvatore Distefano

Abstract:

The main goal of this paper is to present a solution for a water purification system based on an Environmental Internet of Things (EIoT) platform to monitor and control water quality, together with machine learning (ML) models to support decision making and speed up the water purification process. A real case study has been implemented by deploying an EIoT platform and a network of devices, called Gramb meters and belonging to the Gramb project, on wastewater purification systems located in Calabria, in the south of Italy. The data thus collected are used to control the wastewater quality, detect anomalies, and predict the behaviour of the purification system. To this end, three different statistical and machine learning models have been adopted and compared: Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM) autoencoder, and Facebook Prophet (FP). The results demonstrate that the ML solution (LSTM) outperforms the classical statistical approaches (ARIMA, FP) in terms of accuracy, efficiency, and effectiveness in monitoring and controlling the wastewater purification processes.
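Reconstruction-error anomaly detection of the kind the LSTM autoencoder performs can be sketched with a rolling-mean reconstruction; the simulated pH series and the 3-sigma threshold are assumptions for illustration, not the Gramb data or the paper's model.

```python
import numpy as np

def detect_anomalies(series, window=5, k=3.0):
    """Flag points whose deviation from a rolling-mean reconstruction exceeds
    k standard deviations of the residuals (the rolling mean stands in for
    the LSTM autoencoder's reconstruction)."""
    series = np.asarray(series, dtype=float)
    recon = np.convolve(series, np.ones(window) / window, mode="same")
    resid = np.abs(series - recon)
    half = window // 2
    resid[:half] = 0.0                     # ignore convolution edge effects
    resid[-half:] = 0.0
    thresh = resid.mean() + k * resid.std()
    return np.flatnonzero(resid > thresh)

# Simulated pH readings with one injected sensor fault at index 120.
rng = np.random.default_rng(1)
ph = 7.2 + rng.normal(0, 0.02, 200)
ph[120] = 9.5
anomalies = detect_anomalies(ph)
```

The same flag-on-large-residual logic applies whatever model produces the reconstruction, which is why the autoencoder comparison against ARIMA and Prophet is meaningful.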

Keywords: environmental internet of things, EIoT, machine learning, anomaly detection, environment monitoring

Procedia PDF Downloads 141
10846 Dissolved Oxygen Prediction Using Support Vector Machine

Authors: Sorayya Malek, Mogeeb Mosleh, Sharifah M. Syed

Abstract:

In this study, the Support Vector Machine (SVM) technique was applied to predict the dichotomized value of dissolved oxygen (DO) from two freshwater lakes, namely Chini and Bera Lake (Malaysia). The data sample contained 11 water quality parameters collected from 2005 to 2009. All parameters were used to predict the dissolved oxygen concentration, which was dichotomized into three levels (High, Medium, and Low). The input parameters were ranked, and a forward selection method was applied to determine the optimum parameters that yield the lowest errors and highest accuracy. Initial results showed that pH, water temperature, and conductivity are the most important parameters that significantly affect the prediction of DO. The SVM model was then applied using the ANOVA kernel with those parameters, yielding a 74% accuracy rate. We conclude that using SVM models to predict DO is feasible, and that using the dichotomized value of DO yields higher prediction accuracy than using the precise DO value.
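A hedged sketch of the classification setup: synthetic stand-ins for pH, water temperature, and conductivity, a dichotomized DO label, and an SVM classifier. The assumed relationship between the parameters and DO is invented for illustration, and an RBF kernel stands in because scikit-learn does not ship the ANOVA kernel used in the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
# Synthetic stand-ins for the three top-ranked parameters.
ph = rng.uniform(5.5, 8.5, n)
temp = rng.uniform(24, 34, n)              # water temperature (deg C)
cond = rng.uniform(20, 120, n)             # conductivity (uS/cm)
# Assumed relationship: warmer, less neutral, higher-conductivity water
# carries less dissolved oxygen (mg/l).
do = 14 - 0.25 * temp - 0.3 * (7 - ph) ** 2 - 0.02 * cond + rng.normal(0, 0.3, n)
level = np.digitize(do, [4.0, 6.0])        # 0 = Low, 1 = Medium, 2 = High

X = np.column_stack([ph, temp, cond])
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
acc = cross_val_score(model, X, level, cv=5).mean()
```

Standardising the features matters here because conductivity spans a far wider numeric range than pH, which would otherwise dominate the kernel.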

Keywords: dissolved oxygen, water quality, DO prediction, support vector machine

Procedia PDF Downloads 278
10845 Corporate Social Responsibility: An Ethical or a Legal Framework?

Authors: Pouira Askary

Abstract:

Indeed, in our globalized world, which is facing various international crises, transnational corporations and other business enterprises have the capacity to foster economic well-being, development, technological improvement and wealth, as well as to cause adverse impacts on human rights. The UN Human Rights Council has declared that although the primary responsibility to protect human rights lies with the State, transnational corporations and other business enterprises also have a responsibility to respect and protect human rights within the framework of corporate social responsibility. In 2011, the Human Rights Council endorsed the Guiding Principles on Business and Human Rights, a set of guidelines that define the key duties and responsibilities of States and business enterprises with regard to business-related human rights abuses. In the UN's view, the Guiding Principles do not create new legal obligations but constitute a clarification of the implications of existing standards, including under international human rights law. In 2014, the UN Human Rights Council decided to establish a working group on transnational corporations and other business enterprises whose mandate shall be to elaborate an international legally binding instrument to regulate, in international human rights law, the activities of transnational corporations and other business enterprises. It is an extremely difficult task for the working group to codify a legally binding document regulating the behavior of corporations on the basis of the norms of international law. This paper concentrates on the origins of those human rights applicable to business enterprises. The research will argue that the social and ethical roots of CSR are much more institutionalized and elaborated than the legal roots.
Therefore, the first step is to determine whether, and to what extent, corporations have an ethical responsibility to respect human rights and, if so, by which means this ethical and social responsibility can be converted into legal commitments.

Keywords: CSR, ethics, international law, human rights, development, sustainable business

Procedia PDF Downloads 366
10844 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability issues. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. To achieve this, a new type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best results: 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that the multisensor approach achieved higher accuracy than features from any reduced set of sensors, and that using high-level handpicked features improves the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for classifying engagement and distraction was shown to be eye gaze.
It has been shown that we can accurately predict the level of engagement of students with learning disabilities in real time, in a way that is not subject to inter-rater reliability issues, does not rely on human observation, and is not dependent on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students whose individual needs they cannot possibly attend to one by one. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
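The evaluation protocol (leave-one-out cross-validation of a random forest on session-level features) can be sketched as follows; the two synthetic features are invented stand-ins for the nine extracted in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n = 60
engaged = rng.integers(0, 2, n)            # objective CPT-derived label
# Two invented session features standing in for the nine in the study,
# e.g. a gaze-on-target ratio and an interaction rate.
gaze = np.clip(0.4 + 0.4 * engaged + rng.normal(0, 0.15, n), 0, 1)
inter = np.clip(0.3 + 0.5 * engaged + rng.normal(0, 0.2, n), 0, 1)
X = np.column_stack([gaze, inter])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, engaged, cv=LeaveOneOut()).mean()
```

Leave-one-out is a reasonable choice for a small session count like this, since it uses every session for testing exactly once.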

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 80
10843 Structural Reliability Analysis Using Extreme Learning Machine

Authors: Mehul Srivastava, Sharma Tushar Ravikant, Mridul Krishn Mishra

Abstract:

In structural design, the evaluation of the safety and probability of failure of a structure is of significant importance, especially when the variables are random. For real structures, structural reliability can be evaluated by obtaining an implicit limit state function, which depends on the statistically independent variables. In this analysis, the statistically independent random variables considered are the applied load intensity and the depth (or height) of the beam member. There are many approaches to structural reliability problems. In this paper, the Extreme Learning Machine (ELM) technique and the First Order Second Moment (FOSM) method are used to determine the reliability indices for the same set of variables, and the reliability index obtained using ELM is compared with that obtained using FOSM. The higher the reliability index, the more feasible the method for determining reliability.
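An ELM can be sketched in a few lines: a random fixed hidden layer whose output weights are solved in closed form via the pseudoinverse. The limit-state function below (resistance proportional to beam depth minus load intensity) is an invented stand-in for the implicit limit state of a real structure.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Extreme Learning Machine: random fixed hidden layer; output weights
    solved in closed form with the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # sigmoid activations
    return W, b, np.linalg.pinv(H) @ y

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Invented limit-state surrogate g = R - S: resistance grows with beam depth,
# S is the applied load intensity (illustration only, not a real structure).
rng = np.random.default_rng(1)
load = rng.normal(10, 2, 500)                   # load intensity
depth = rng.normal(0.4, 0.05, 500)              # beam depth (m)
g = 80 * depth - load                           # limit-state function value
X = np.column_stack([load, depth])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)       # standardise inputs
W, b, beta = elm_fit(Xs, g)
pred = elm_predict(Xs, W, b, beta)
```

Because only the output weights are trained, and in one least-squares step, the ELM surrogate of the limit state is cheap enough to embed in a reliability-index search.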

Keywords: reliability, reliability index, statistically independent, extreme learning machine

Procedia PDF Downloads 670
10842 Cirrhosis Mortality Prediction as Classification using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality, and a comprehensive feature space including demographic information, comorbidities, clinical procedures, and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used, and the Model for End-Stage Liver Disease (MELD) score is used as a comparator for mortality prediction. All of our models statistically significantly outperform the MELD-score model, showing an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM combined with an ensemble of machine learning models further improves performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality; however, due to the sparsity of the temporal information needed by FSM, the FSM model alone does not yield significant improvements. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients, and it builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
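Since the comparison against the MELD score rests on AUC, the metric itself is worth making concrete. The rank-based identity below computes it (ignoring ties for simplicity), and the patient scores are invented for illustration.

```python
import numpy as np

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) identity: the probability that a
    randomly chosen positive case outranks a randomly chosen negative one."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Invented risk scores for eight patients (1 = died within one year).
died = [0, 0, 1, 0, 1, 1, 0, 1]
risk = [8, 12, 30, 10, 25, 28, 15, 9]
score = auc(died, risk)          # 13 of the 16 positive-negative pairs ordered correctly
```

A 10% AUC gain therefore means the models order a correspondingly larger fraction of patient pairs correctly by risk.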

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning

Procedia PDF Downloads 127
10841 Enhancing Word Meaning Retrieval Using FastText and Natural Language Processing Techniques

Authors: Sankalp Devanand, Prateek Agasimani, Shamith V. S., Rohith Neeraje

Abstract:

Machine translation has witnessed significant advancements in recent years, but the translation of languages with distinct linguistic characteristics, such as English and Sanskrit, remains a challenging task. This research presents the development of a dedicated English-to-Sanskrit machine translation model, aiming to bridge the linguistic and cultural gap between these two languages. Using a variety of natural language processing (NLP) approaches, including FastText embeddings, this research proposes a thorough method to improve word meaning retrieval. Data preparation, part-of-speech tagging, dictionary searches, and transliteration are all included in the methodology. The study also addresses the implementation of an interpreter pattern and uses a word similarity task to assess the quality of word embeddings. The experimental outcomes show how the suggested approach may be used to enhance word meaning retrieval tasks with greater efficacy, accuracy, and adaptability. Evaluation of the model's performance is conducted through rigorous testing, comparing its output against existing machine translation systems. The assessment includes quantitative metrics such as BLEU scores, METEOR scores, Jaccard Similarity, etc.
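The word-similarity evaluation of embedding quality reduces to nearest-neighbour retrieval under cosine similarity, sketched below with hand-made toy vectors standing in for trained FastText embeddings.

```python
import numpy as np

def nearest(word, embeddings, k=2):
    """Return the k vocabulary words closest to `word` by cosine similarity,
    as in a word-similarity check of embedding quality."""
    q = embeddings[word]
    sims = {}
    for w, v in embeddings.items():
        if w != word:
            sims[w] = q @ v / (np.linalg.norm(q) * np.linalg.norm(v))
    return sorted(sims, key=sims.get, reverse=True)[:k]

# Hand-made 3-d vectors standing in for trained FastText embeddings.
emb = {
    "fire":  np.array([0.9, 0.1, 0.0]),
    "agni":  np.array([0.8, 0.2, 0.1]),   # Sanskrit: fire
    "water": np.array([0.1, 0.9, 0.2]),
    "jala":  np.array([0.2, 0.8, 0.1]),   # Sanskrit: water
}
match = nearest("fire", emb, k=1)
```

In a bilingual setting like English-Sanskrit, the same retrieval runs over aligned embedding spaces, so a query in one language can surface its translation in the other.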

Keywords: machine translation, English to Sanskrit, natural language processing, word meaning retrieval, fastText embeddings

Procedia PDF Downloads 30
10840 Economic Community of West African States Court of Justice and the Development of Human Rights Jurisprudence in Africa: A Difficult Take-off with a Bright and Visionary Landing

Authors: Timothy Fwa Yerima

Abstract:

This paper evaluates the development of human rights jurisprudence in Africa by the ECOWAS Court of Justice. It notes that although ECOWAS, unlike the African Court of Human and Peoples’ Rights, was not established with the aim of promoting and protecting human rights, the 1991 ECOWAS Court Protocol and the 1993 ECOWAS Revised Treaty undoubtedly give the ECOWAS Court its human rights mandate. The paper, however, points out that despite the availability of these two laws, the ECOWAS Court had difficulty exercising its human rights mandate, in view of the twin problems of private parties’ lack of access to the Court and the Court’s lack of personal jurisdiction to entertain cases filed by private parties. The paper considers the 2005 Supplementary Protocol not only as an effective legal framework in the West African sub-region that tackles these problems in human rights cases but also as a strong foundation upon which the Court has been developing human rights jurisprudence in Africa through the interpretation and application of this law and the other sources of law of the Court. After a thorough analysis of some principles laid down by the ECOWAS Court so far, the paper observes that human rights jurisprudence in Africa is growing rapidly, showing that although the ECOWAS Court initially had difficulty with its human rights mandate, today it has a bright and visionary landing. The paper concludes that the West African sub-region will witness a more effective performance of the ECOWAS Court if some of its challenges are tackled.

Keywords: access, African human rights, ECOWAS court of justice, jurisprudence, personal jurisdiction

Procedia PDF Downloads 338
10839 Semantic Differences between Bug Labeling of Different Repositories via Machine Learning

Authors: Pooja Khanal, Huaming Zhang

Abstract:

Labeling of issues/bugs, also known as bug classification, plays a vital role in software engineering. Some known labels/classes of bugs are 'User Interface', 'Security', and 'API'. Most of the time, when reporters report a bug, they try to assign some predefined label to it. Those issues are reported against a project, and each project is a repository on GitHub/GitLab containing multiple issues. There are many software project repositories, ranging from individual projects to commercial projects. The labels assigned in different repositories may depend on various factors, such as human instinct, generalization of labels, and the label assignment policy followed by the reporter. While the reporter of an issue may instinctively give it one label, another person reporting the same issue may label it differently. Thus, it is not known mathematically whether a label in one repository is similar to or different from the label in another repository. Hence, the primary goal of this research is to find the semantic differences between the bug labeling of different repositories via machine learning. Independent optimal classifiers for individual repositories are built first, using text features from the reported issues; the optimal classifiers may include a combination of multiple classifiers stacked together. Those classifiers are then used to cross-test other repositories, which allows the similarity of labels to be deduced mathematically. The product of this ongoing research includes a formalized open-source database of GitHub issues that is used to deduce the similarity of the labels pertaining to the different repositories.
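The cross-testing idea can be sketched with scikit-learn: train a text classifier on one repository's labelled issues and measure how often its predictions agree with another repository's labels. The two tiny issue corpora are invented for illustration, and a TF-IDF plus logistic regression pipeline stands in for the study's stacked optimal classifiers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented issue corpora standing in for two GitHub repositories.
repo_a = [
    ("button misaligned on settings page", "UI"),
    ("dark theme colors wrong in dialog", "UI"),
    ("xss vulnerability in login form", "Security"),
    ("token leaked in debug logs", "Security"),
    ("endpoint returns 500 on empty body", "API"),
    ("rest response missing pagination field", "API"),
]
repo_b = [
    ("dropdown overlaps footer on mobile", "UI"),
    ("csrf check missing on password reset", "Security"),
    ("graphql query ignores filter argument", "API"),
]

texts_a, labels_a = zip(*repo_a)
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts_a, labels_a)

# Cross-test: how often does repo A's classifier agree with repo B's labels?
texts_b, labels_b = zip(*repo_b)
agreement = (clf.predict(texts_b) == labels_b).mean()
```

The agreement rate, aggregated over many repository pairs, is the mathematical measure of label similarity the abstract describes.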

Keywords: bug classification, bug labels, GitHub issues, semantic differences

Procedia PDF Downloads 186
10838 Humans as Enrichment: Human-Animal Interactions and the Perceived Benefit to the Cheetah (Acinonyx jubatus), Human and Zoological Establishment

Authors: S. J. Higgs, E. Van Eck, K. Heynis, S. H. Broadberry

Abstract:

Engagement with non-human animals is a rapidly growing field of study within the animal science and social science sectors, with human-animal interactions occurring in many forms: interactions, encounters and animal-assisted therapy. To our knowledge, a wide array of research has been published on domestic and livestock human-animal interactions; however, there appear to be fewer publications relating to zoo animals and the effect these interactions have on the animal, the human and the establishment. The aim of this study was to identify whether there were any perceivable benefits from the human-animal interaction for the cheetah, the human and the establishment. Behaviour data were collected on the cheetah and the human participants before, during and after the interaction to highlight any trends, with nine interactions conducted. All 35 participants were asked to fill in a questionnaire prior to the interaction and immediately after, to ascertain whether their perceptions changed following an interaction with the cheetah. An online questionnaire was also distributed for three months to gain an understanding of the perceptions of human-animal interactions among members of the public, gaining 229 responses. Both questionnaires contained qualitative and quantitative questions to allow specific definitive answers to be analysed, but also expansion on the participants’ perceptions of human-animal interactions. In conclusion, it was found that participants’ perceptions of human-animal interactions saw a positive change, with 64% of participants altering their opinion and viewing the interaction as beneficial for the cheetah (a reduction in assumed stress behaviours) following participation in a 15-minute interaction. However, it was noted that many participants felt the interaction lacked educational value, and this is therefore an area in which zoological establishments can work to further improve.
The results highlighted many positive benefits for the human, animal and establishment, however, the study does indicate further areas for research in order to promote positive perceptions of human-animal interactions and to further increase the welfare of the animal during these interactions, with recommendations to create and regulate legislation.

Keywords: Acinonyx jubatus, encounters, human-animal interactions, perceptions, zoological establishments

Procedia PDF Downloads 176
10837 Enhancing Human Security Through Comprehensive Counter-Terrorism Measures

Authors: Alhaji Khuzaima Mohammed Osman, Zaeem Sheikh Abdul Wadudi Haruna

Abstract:

This article aims to explore the crucial link between counter-terrorism efforts and the preservation of human security. As acts of terrorism continue to pose significant threats to societies worldwide, it is imperative to develop effective strategies that mitigate risks while safeguarding the rights and well-being of individuals. This paper discusses key aspects of counter-terrorism and human security, emphasizing the need for a comprehensive approach that integrates intelligence, prevention, response, and resilience-building measures. By highlighting successful case studies and lessons learned, this article provides valuable insights for policymakers, law enforcement agencies, and practitioners in their quest to address terrorism and foster human security.

Keywords: human security, risk mitigation, terrorist activities, civil liberties

Procedia PDF Downloads 73