Search results for: cointegration approach in panel data
33632 Effectiveness of Blended Learning in Public School During Covid-19: A Way Forward
Authors: Sumaira Taj
Abstract:
Blended learning has emerged as a prerequisite approach for teaching in all schools after the outbreak of the COVID-19 pandemic. However, how ready public elementary and secondary schools in Pakistan are to adopt this approach, and what should be done to prepare schools and students for blended learning, are the questions that this paper attempts to answer. A mixed-method research methodology was used to collect data from 40 teachers, 500 students, and 10 mothers. Descriptive statistics were used to analyze the quantitative data. As far as readiness is concerned, schools lack resources for blended/virtual/online classes, from infrastructure to skills; parents' literacy levels hindered students' learning; and gaps in teachers' skills presented challenges to a smooth and swift shift of the schools from face-to-face learning to blended learning. It is recommended to establish a conducive environment in schools by providing all required resources and skills. Special training should be organized for parents with low literacy levels, and multiple delivery modes should be adopted to benefit all students. Keywords: blended learning, challenges in online classes, education in covid-19, public schools in pakistan
Procedia PDF Downloads 167
33631 Improving Patient-Care Services at an Oncology Center with a Flexible Adaptive Scheduling Procedure
Authors: P. Hooshangitabrizi, I. Contreras, N. Bhuiyan
Abstract:
This work presents an online scheduling problem which accommodates multiple requests of patients for chemotherapy treatments in a cancer center of a major metropolitan hospital in Canada. To solve the problem, an adaptive flexible approach is proposed which systematically combines two optimization models. The first model is intended to dynamically schedule arriving requests in the form of waiting lists, whereas the second model is used to reschedule the already booked patients with the goal of finding better resource allocations when new information becomes available. Both models are created as mixed integer programming formulations. Various controllable and flexible parameters, such as deviating from the prescribed target dates by a pre-determined threshold, changing the start time of already booked appointments, and the maximum number of appointments to move in the schedule, are included in the proposed approach to provide sufficient degrees of flexibility in handling arriving requests and unexpected changes. Several computational experiments are conducted to evaluate the performance of the proposed approach using historical data provided by the oncology clinic. Our approach achieves markedly better results than those of the scheduling system currently used in practice. Moreover, several analyses are conducted to evaluate the effect of considering different levels of flexibility on the obtained results and to assess the performance of the proposed approach in dealing with last-minute changes. We strongly believe that the proposed flexible adaptive approach is very well-suited for implementation at the clinic to provide better patient-care services and to utilize available resources more efficiently. Keywords: chemotherapy scheduling, multi-appointment modeling, optimization of resources, satisfaction of patients, mixed integer programming
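As an illustration of the kind of mixed integer programming formulation this abstract refers to, the following is a minimal sketch using the open-source PuLP library; the patients, planning horizon, chair capacity and deviation threshold are invented placeholders, not the clinic's actual model.

```python
# Minimal sketch of a chemotherapy-appointment assignment MIP (illustrative only;
# the paper's actual formulations, data and constraints are not reproduced here).
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

requests = {"p1": 3, "p2": 5, "p3": 5}   # patient -> prescribed target day (assumed)
days = range(1, 8)                        # one-week horizon (assumed)
chair_capacity = 2                        # treatment chairs available per day (assumed)
max_deviation = 2                         # allowed shift from the target date (assumed)

prob = LpProblem("chemo_scheduling", LpMinimize)
x = {(p, d): LpVariable(f"x_{p}_{d}", cat=LpBinary) for p in requests for d in days}

# Objective: minimise total deviation from the prescribed target dates.
prob += lpSum(abs(d - requests[p]) * x[p, d] for p in requests for d in days)

for p in requests:                        # every request gets exactly one day
    prob += lpSum(x[p, d] for d in days) == 1
    for d in days:                        # respect the deviation threshold
        if abs(d - requests[p]) > max_deviation:
            prob += x[p, d] == 0
for d in days:                            # daily chair capacity
    prob += lpSum(x[p, d] for p in requests) <= chair_capacity

prob.solve()
print({p: d for p in requests for d in days if value(x[p, d]) == 1})
```

The rescheduling model described in the abstract would add a second objective term penalising moves of already booked appointments; that extension is not shown here.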
Procedia PDF Downloads 170
33630 Privacy Rights of Children in the Social Media Sphere: The Benefits and Challenges Under the EU and US Legislative Framework
Authors: Anna Citterbergova
Abstract:
This study explores the safeguards and guarantees for children’s personal data protection under the current EU and US legislative framework, namely the GDPR (2018) and COPPA (2000). Considering that children are online for the majority of their free time, one cannot overlook the negative side effects that may be associated with online participation, which may put children’s wellbeing and their fundamental rights at risk. The question of whether the current relevant legislative framework, in relation to the responsibilities of internet service providers (ISPs), provides adequate safeguards and guarantees for children’s personal data protection has been an evolving debate both in the US and in the EU. From a children’s rights perspective, processors of personal data have certain obligations that must meet international human rights principles (e.g., the CRC, ECHR), which require taking into account the best interest of the child. Accordingly, the need to protect children’s privacy online remains strong and relevant with the expansion of the number and importance of social media platforms in human life. At the same time, the landscape of the internet is rapidly evolving, and commercial interests are taking a more targeted approach in seeking children’s data. Therefore, it is essential to constantly evaluate the ongoing and evolving newly adopted market policies of ISPs that may misuse gaps in the current letter of the law. Previous studies in the field have already pointed out that both the GDPR and COPPA may theoretically not be sufficient for protecting children’s personal data. With the focus on social media platforms, this study uses the doctrinal-descriptive method to identify the mechanisms enshrined in the GDPR and COPPA designed to protect children’s personal data. In its second part, the study includes a data-gathering phase involving the national data protection authorities responsible for monitoring and supervising the GDPR in relation to children’s personal data protection, who monitor the enforcement of the data protection rules throughout the European Union and contribute to their consistent application. This gathered primary-source data will later be used to outline the series of benefits and challenges to children’s personal data protection faced by these institutions, and in an analysis that aims to suggest if and/or how to hold ISPs accountable while striking a fair balance between commercial rights and the right to protection of children’s personal data. The preliminary results can be divided into two categories: first, conclusions from the doctrinal-descriptive part of the study; second, specific cases and situations from the practice of national data protection authorities. While for the first part concrete conclusions can already be presented, the second part is currently still in the data-gathering phase. The result of this research is a comprehensive analysis of the safeguards and guarantees for children’s personal data protection under the current EU and US legislative framework, based on a doctrinal-descriptive approach and original empirical data. Keywords: personal data of children, personal data protection, GDPR, COPPA, ISPs, social media
Procedia PDF Downloads 97
33629 Monomial Form Approach to Rectangular Surface Modeling
Authors: Taweechai Nuntawisuttiwong, Natasha Dejdumrong
Abstract:
Geometric modeling plays an important role in the construction and manufacturing of curves, surfaces and solid models. Their algorithms are critically important not only in the automobile, ship and aircraft manufacturing business, but are also absolutely necessary in a wide variety of modern applications, e.g., robotics, optimization, computer vision, data analytics and visualization. The calculation and display of geometric objects can be accomplished by these six techniques: polynomial basis, recursive, iterative, coefficient matrix, polar form approach and pyramidal algorithms. In this research, the coefficient matrix (simply called the monomial form approach) will be used to model polynomial rectangular patches, i.e., Said-Ball, Wang-Ball, DP, Dejdumrong and NB1 surfaces. Some examples of the monomial forms for these surface models are illustrated in many aspects, e.g., construction, derivatives, model transformation, degree elevation and degree reduction. Keywords: monomial forms, rectangular surfaces, CAGD curves, monomial matrix applications
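As a small illustration of the monomial (coefficient-matrix) form of a rectangular patch, the NumPy sketch below evaluates S(u, v) = Σᵢ Σⱼ c[i][j] uⁱ vʲ and its u-derivative; the 3x3 coefficient grid is an arbitrary assumption, not one of the surfaces named in the abstract.

```python
# Sketch: evaluating a rectangular patch written in monomial (coefficient-matrix)
# form, S(u, v) = U(u) C V(v)^T. The coefficients below are illustrative only.
import numpy as np

C_z = np.array([[0.0, 0.0, 1.0],
                [0.0, 2.0, 0.0],
                [1.0, 0.0, 0.0]])   # c[i][j] multiplies u**i * v**j

def eval_patch(C, u, v):
    """Evaluate sum_i sum_j C[i, j] * u**i * v**j at (u, v) in [0, 1] x [0, 1]."""
    n, m = C.shape
    U = np.array([u**i for i in range(n)])
    V = np.array([v**j for j in range(m)])
    return U @ C @ V

def du_coefficients(C):
    """Partial derivative w.r.t. u is again a monomial-form patch of lower degree."""
    n, _ = C.shape
    return np.array([i * C[i, :] for i in range(1, n)])

print(eval_patch(C_z, 0.5, 0.5))                   # surface height at the patch centre
print(eval_patch(du_coefficients(C_z), 0.5, 0.5))  # dS/du at the same point
```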
Procedia PDF Downloads 146
33628 WebAppShield: An Approach Exploiting Machine Learning to Detect SQLi Attacks in an Application Layer in Run-time
Authors: Ahmed Abdulla Ashlam, Atta Badii, Frederic Stahl
Abstract:
In recent years, SQL injection attacks have been identified as being prevalent against web applications. They affect network security and user data, which leads to a considerable loss of money and data every year. This paper presents the use of classification algorithms in machine learning, using a method to classify login data by filtering inputs into "SQLi" or "Non-SQLi", thus increasing the reliability and accuracy of results in terms of deciding whether an operation is an attack or a valid operation. A method, Web-App auto-generated twin data structure replication with shielding against SQLi attacks (WebAppShield), has been developed; it verifies all users and prevents attackers (SQLi attacks) from entering and/or accessing the database, admitting only operations that the machine learning module predicts as "Non-SQLi". A special login form has been developed with a special instance of data validation; this verification process secures the web application from its early stages. The system has been tested and validated; up to 99% of SQLi attacks have been prevented. Keywords: SQL injection, attacks, web application, accuracy, database
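The classification idea described here, labelling login inputs as "SQLi" or "Non-SQLi", can be sketched in a few lines with scikit-learn; the character n-gram features, the linear SVM and the toy training strings below are illustrative assumptions and do not reproduce the paper's actual feature set or models.

```python
# Sketch: classifying login inputs as "SQLi" vs "Non-SQLi" with character n-grams
# and a linear SVM. Training strings are toy examples, not the paper's dataset.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

inputs = [
    "admin' OR '1'='1",            # classic tautology injection
    "'; DROP TABLE users; --",     # stacked-query injection
    "alice@example.com",           # benign login
    "Str0ngPassw0rd!",             # benign login
]
labels = ["SQLi", "Non-SQLi", "Non-SQLi", "Non-SQLi"]
labels = ["SQLi", "SQLi", "Non-SQLi", "Non-SQLi"]  # 1:1 with the inputs above

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),  # character-level features
    LinearSVC(),
)
model.fit(inputs, labels)

print(model.predict(["bob' UNION SELECT password FROM users --"]))  # expected: SQLi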
Procedia PDF Downloads 153
33627 Solar Panel Design Aspects and Challenges for a Lunar Mission
Authors: Mannika Garg, N. Srinivas Murthy, Sunish Nair
Abstract:
TeamIndus is the only Indian team to have participated in the Google Lunar X Prize (GLXP). The GLXP is an incentive prize space competition organized by the XPrize Foundation and sponsored by Google. The main objective of the mission is to soft-land a rover on the lunar surface, travel a minimum displacement of 500 meters, and transmit HD and NRT videos and images to the Earth. TeamIndus is designing a lunar lander which carries the rover with it and delivers it onto the surface of the moon with a soft landing. For the lander to survive throughout the mission, energy is required to operate all attitude control sensors, actuators, heaters and other necessary components. Photovoltaic solar array systems are the most common and primary source of power generation for any spacecraft. The scope of this paper is to provide a system-level approach for designing the solar array systems of the lander to generate the power required to accomplish the mission. For this mission, the design effort is directed toward higher efficiency, high reliability and high specific power. Towards this approach, highly efficient multi-junction cells have been considered. The design is also influenced by other constraints, such as the mission profile, chosen spacecraft attitude, overall lander configuration, cost effectiveness and sizing requirements. This paper also addresses various solar array design challenges, such as operating temperature, shadowing, radiation environment and mission life, and the strategy for supporting the required power levels (peak and average). The challenge of generating sufficient power at the time of surface touchdown, due to the low sun elevation (El) and azimuth (Az) angle, which depends on the lunar landing site, is also showcased in this paper. To achieve this goal, an energy balance analysis has been carried out to study the impact of the above-mentioned factors and to meet the requirements, and it is discussed in this paper. Keywords: energy balance analysis, multi junction solar cells, photovoltaic, reliability, spacecraft attitude
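To make the energy-balance idea concrete, the following back-of-the-envelope sketch compares generated and required power for a flat panel at low sun elevation; the panel area, cell efficiency, load figure and elevation angle are placeholder assumptions, not TeamIndus design values.

```python
# Back-of-the-envelope energy balance for a lander solar array (all figures assumed).
import math

SOLAR_CONSTANT = 1361.0      # W/m^2 near the Moon (no atmosphere)
panel_area = 1.2             # m^2, assumed
cell_efficiency = 0.30       # multi-junction cells, assumed
sun_elevation_deg = 15.0     # low Sun at the assumed landing site/time

# A flat, horizontally mounted panel sees sin(elevation) of the incident flux.
generated_w = (SOLAR_CONSTANT * panel_area * cell_efficiency
               * math.sin(math.radians(sun_elevation_deg)))

avg_load_w = 90.0            # average bus load (sensors, heaters, comms), assumed
print(f"generated: {generated_w:.1f} W, required: {avg_load_w:.1f} W, "
      f"margin: {generated_w - avg_load_w:+.1f} W")
```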
Procedia PDF Downloads 230
33626 Transdisciplinary Pedagogy: An Arts-Integrated Approach to Promote Authentic Science, Technology, Engineering, Arts, and Mathematics Education in Initial Teacher Education
Authors: Anne Marie Morrin
Abstract:
This paper will focus on the design, delivery and assessment of a transdisciplinary STEAM (Science, Technology, Engineering, Arts, and Mathematics) education initiative in a college of education in Ireland. The project explores a transdisciplinary approach to supporting STEAM education where the concepts, methodologies and assessments employed derive from visual art sessions within initial teacher education. The research will demonstrate that the STEAM education approach is effective when visual art concepts and methods are placed at the core of the teaching and learning experience. Within this study, emphasis is placed on authentic collaboration and transdisciplinary pedagogical approaches with the STEAM subjects. The partners included a combination of teaching expertise in STEM and visual arts education, artists, in-service and pre-service teachers, and children. The inclusion of all the stakeholders mentioned moves towards a more authentic approach where transdisciplinary practice is at the core of the teaching and learning. Qualitative data was collected using a combination of questionnaires (focused and open-ended questions) and focus groups. In addition, data was collected through video diaries where students reflected on their visual journals and transdisciplinary practice, which gave rich insight into participants' experiences and opinions on their learning. It was found that an effective program of STEAM education integration was informed by co-teaching (continuous professional development), which involved a commitment to adaptable and flexible approaches to teaching, learning, and assessment, as well as the importance of continuous reflection-in-action by all participants. The delivery of a transdisciplinary model of STEAM education was devised to reconceptualise how individual subject areas can develop essential skills and tackle critical issues (such as self-care and climate change) through data visualisation and technology. The success of the project can be attributed to the collaboration, which was inclusive and flexible, and to the willingness of the various stakeholders to be involved in the design and implementation of the project from conception to completion. The case study approach taken is particularistic (focusing on the STEAM-ED project), descriptive (providing in-depth descriptions from varied and multiple perspectives), and heuristic (interpreting the participants’ experiences and what meaning they attributed to their experiences). Keywords: collaboration, transdisciplinary, STEAM, visual arts education
Procedia PDF Downloads 49
33625 50+ Customers' Behavior in the Financial Market of the Czech Republic
Authors: K. Matušínská, H. Starzyczná, M. Stoklasa
Abstract:
The paper deals with the behaviour of the 50+ segment in the financial market of the Czech Republic. This segment can be regarded as a strong market force and a crucial business potential for financial business units. The main objective of this paper is to analyse the behaviour of customers aged 50-60 years in the financial market in the Czech Republic and to propose a suitable marketing approach to satisfy their demands in the areas of product, price, distribution and marketing communication policy. The paper is based on data from one part of a primary marketing research study. It determines the basic problem areas, defines financial services marketing, and states the primary research problem, hypotheses and primary research methodology. Finally, a suitable marketing approach for the selected sub-segment aged 50-60 years is proposed according to the marketing research findings. Keywords: population aging in the Czech Republic, segment 50-60 years, financial services marketing, marketing research, marketing approach
Procedia PDF Downloads 384
33624 Implementation of an IoT Sensor Data Collection and Analysis Library
Authors: Jihyun Song, Kyeongjoo Kim, Minsoo Lee
Abstract:
Due to the development of information technology and wireless Internet technology, various data are being generated in many fields. These data are advantageous in that they provide real-time information to the users themselves. However, when the data are accumulated and analyzed, much more information can be extracted. In addition, the development and dissemination of boards such as the Arduino and Raspberry Pi have made it possible to easily test various sensors, and it is possible to collect sensor data directly by using database application tools such as MySQL. These directly collected data can be used for various research purposes and are useful as data for data mining. However, there are many difficulties in using such boards to collect data, especially when the user is not a computer programmer or is using them for the first time. Even if data are collected, a lack of expert knowledge or experience may cause difficulties in data analysis and visualization. In this paper, we aim to construct a library for sensor data collection and analysis to overcome these problems. Keywords: clustering, data mining, DBSCAN, k-means, k-medoids, sensor data
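The two concerns such a library would cover, storing sensor readings and clustering them, can be sketched as follows; SQLite stands in here for the MySQL tool named in the abstract, and the table name and synthetic readings are assumptions for illustration.

```python
# Sketch: store sensor readings locally, then cluster them with k-means and DBSCAN
# (SQLite stands in for MySQL; the table and the fake readings are assumptions).
import sqlite3
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

conn = sqlite3.connect("sensors.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, temperature REAL, humidity REAL)")
rng = np.random.default_rng(0)
fake = [(float(i), 20 + rng.normal(0, 1), 40 + rng.normal(0, 2)) for i in range(100)]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", fake)
conn.commit()

data = np.array(conn.execute("SELECT temperature, humidity FROM readings").fetchall())

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
dbscan_labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(data)
print("k-means clusters:", np.unique(kmeans_labels))
print("DBSCAN clusters (-1 = noise):", np.unique(dbscan_labels))
```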
Procedia PDF Downloads 379
33623 Government (Big) Data Ecosystem: Definition, Classification of Actors, and Their Roles
Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis
Abstract:
Organizations, including governments, generate (big) data that are high in volume, velocity, and veracity, and come from a variety of sources. Public administrations are using (big) data, implementing base registries, and enforcing data sharing across the entire government to deliver (big) data related integrated services, provide insights to users, and support good governance. Government (big) data ecosystem actors represent distinct entities that provide data, consume data, manipulate data to offer paid services, and extend data services such as data storage and hosting services to other actors. In this research work, we perform a systematic literature review. The key objectives of this paper are to propose a robust definition of the government (big) data ecosystem and a classification of government (big) data ecosystem actors and their roles. We present a graphical view of actors, roles, and their relationships in the government (big) data ecosystem, and we discuss our research findings. We did not find many published research articles about the government (big) data ecosystem, including its definition and the classification of actors and their roles. Therefore, we borrowed ideas for the government (big) data ecosystem from numerous areas in the literature, including scientific research data, humanitarian data, open government data, and industry data. Keywords: big data, big data ecosystem, classification of big data actors, big data actors roles, definition of government (big) data ecosystem, data-driven government, eGovernment, gaps in data ecosystems, government (big) data, public administration, systematic literature review
Procedia PDF Downloads 163
33622 Spillover Effect of Husbands' Lifestyle on Their Wives' Marital Satisfaction in China
Authors: Xitong Liu, Yutong Huang, Shu-Ching Yang
Abstract:
The phenomena of hypergamous and hypogamous marriages have become common due to the imbalanced sex ratio caused by the Chinese social preference for sons. Our research explores the spillover effect of husbands' lifestyles on their wives' marital satisfaction in China. Both personal and spousal lifestyle elements are used to develop regression models to study husbands' spillover effects on women's marital satisfaction. With data from the China Family Panel Studies and Stata for analysis, we tested our hypothesis that both smoking and substance use by a spouse negatively impact women's marital satisfaction. Our empirical findings suggest that substance use has negative implications for marital satisfaction. In particular, husbands' substance use is more critical to wives' marital satisfaction than wives' own behaviours. Conversely, another behaviour indicating bad habits, the number of times the spouse drank alcohol, had no significant effect on the wife's marital satisfaction. We conclude our investigation and provide future implications for scholars in the family economics field. Keywords: Asian/Pacific Islander families, family economics, housework/division of labor, spillover
Procedia PDF Downloads 125
33621 Reverse Logistics Information Management Using Ontological Approach
Authors: F. Lhafiane, A. Elbyed, M. Bouchoum
Abstract:
The Reverse Logistics (RL) process is considered a complex and dynamic network that involves many stakeholders such as suppliers, manufacturers, warehouses, retailers, and customers; this complexity is inherent in such processes due to the lack of perfect knowledge or to conflicting information. Ontologies, on the other hand, can be considered an approach to overcome the problem of sharing knowledge and communication among the various reverse logistics partners. In this paper, we propose a semantic representation based on a hybrid architecture for building the ontologies in a bottom-up (ascendant) way; this method facilitates the semantic reconciliation between the heterogeneous information systems (ICT) that support reverse logistics processes and product data. Keywords: Reverse Logistics, information management, heterogeneity, ontologies, semantic web
Procedia PDF Downloads 492
33620 New Hybrid Method to Model Extreme Rainfalls
Authors: Youness Laaroussi, Zine Elabidine Guennoun, Amine Amar
Abstract:
Modeling and forecasting the dynamics of rainfall occurrences constitute one of the major topics that have been largely treated by statisticians, hydrologists, climatologists and many other groups of scientists. On the same issue, we propose in the present paper a new hybrid method, which combines extreme value and fractal theories. We illustrate the use of our methodology on transformed Emberger Index series constructed from data recorded in Oujda (Morocco). The index is first treated by the Peaks Over Threshold (POT) approach to identify excess observations over an optimal threshold u. In the second step, we consider the resulting excesses as a fractal object embedded in a one-dimensional time space, and we identify its fractal dimension by box counting. We discuss descriptions of the rainfall data sets under the Generalized Pareto Distribution, as supported by Extreme Value Theory (EVT). We show that, despite the appropriateness of the return periods given by the POT approach, the introduction of the fractal dimension provides accurate interpretation results, which can improve the apprehension of rainfall occurrences. Keywords: extreme values theory, fractals dimensions, peaks over threshold, rainfall occurrences
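The two steps of the hybrid method, a POT/GPD fit followed by a box-counting dimension of the exceedance times, can be sketched as follows; the synthetic series and the 95th-percentile threshold are assumptions, not the Oujda data or the paper's optimal threshold.

```python
# Sketch of the two-step hybrid idea: (1) Peaks-Over-Threshold with a GPD fit,
# (2) box-counting dimension of the exceedance times. Data below are synthetic.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
series = rng.gamma(shape=2.0, scale=5.0, size=2000)   # stand-in for the index series
u = np.quantile(series, 0.95)                         # illustrative threshold choice

exceed_idx = np.where(series > u)[0]
excesses = series[exceed_idx] - u
shape, loc, scale = genpareto.fit(excesses, floc=0)   # GPD fit to the excesses
print(f"GPD shape={shape:.3f}, scale={scale:.3f}")

def box_counting_dimension(times, n_points):
    """Estimate the fractal dimension of a set of event times in [0, n_points)."""
    t = np.asarray(times) / n_points                  # normalise to [0, 1)
    sizes = np.array([2**k for k in range(2, 9)])     # number of boxes per scale
    counts = [len(np.unique(np.floor(t * s))) for s in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return slope

print("box-counting dimension of exceedance times:",
      round(box_counting_dimension(exceed_idx, len(series)), 3))
```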
Procedia PDF Downloads 362
33619 An Integrated Approach for Risk Management of Transportation of HAZMAT: Use of Quality Function Deployment and Risk Assessment
Authors: Guldana Zhigerbayeva, Ming Yang
Abstract:
Transportation of hazardous materials (HAZMAT) is inevitable in the process industries. Statistics show that a significant number of accidents have occurred during the transportation of HAZMAT. This makes risk management of HAZMAT transportation an important topic. Tree-based methods, including fault trees, event trees and cause-consequence analysis, as well as Bayesian networks, have been applied to risk management of HAZMAT transportation. However, there is limited work on the development of a systematic approach. The existing approaches fail to build up the linkages between the regulatory requirements and the development of safety measures. Analysis of historical data from past accident report databases limits our focus to specific incidents and their specific causes. Thus, we may overlook some essential elements in risk management, including regulatory compliance, field expert opinions, and suggestions. A systematic approach is needed to translate the regulatory requirements of HAZMAT transportation into specified safety measures (both technical and administrative) to support the risk management process. This study aims to first adapt the House of Quality (HoQ) into a House of Safety (HoS) and proposes a new approach: Safety Function Deployment (SFD). The results of SFD will be used in a multi-criteria decision-support system to find an optimal route for HAZMAT transportation. The proposed approach will be demonstrated through a hypothetical transportation case in Kazakhstan. Keywords: hazardous materials, risk assessment, risk management, quality function deployment
Procedia PDF Downloads 143
33618 Evaluation of Three Potato Cultivars for Processing (Crisp French Fries)
Authors: Hatim Bastawi
Abstract:
Three varieties of potatoes, namely Agria, Alpha and Diamant, were evaluated for their suitability for the industrial production of French fries. The evaluation was undertaken by testing the quality parameters of specific gravity, dry matter, peeling ratio and defects after frying, together with a panel test. The variety Agria ranked best, followed by Alpha, with regard to the parameters tested. On the other hand, Diamant showed a significantly higher defect percentage than the other cultivars and was also judged to have significantly lower acceptance by the panelists. Keywords: cultivars, crisps, French fries
Procedia PDF Downloads 261
33617 Predicting the Human Impact of Natural Onset Disasters Using Pattern Recognition Techniques and Rule Based Clustering
Authors: Sara Hasani
Abstract:
This research focuses on natural sudden-onset disasters, characterised as ‘occurring with little or no warning and often causing excessive injuries far surpassing the national response capacities’. Based on panel analysis of the historic record of 4,252 natural onset disasters between 1980 and 2015, a predictive method was developed to predict the human impact of a disaster (fatalities, injured, homeless) with less than 3% error. The geographical dispersion of the disasters includes every country where data were available, cross-examined from various humanitarian sources. The records were then filtered down to the 4,252 disasters for which the five predictive variables (disaster type, HDI, DRI, population, and population density) were clearly stated. The procedure was designed based on a combination of pattern recognition techniques and rule-based clustering for prediction, with discrimination analysis used to validate the results further. The results indicate that there is a relationship between a disaster's human impact and the five socio-economic characteristics of the affected country mentioned above. As a result, a framework was put forward which can predict a disaster's human impact based on its severity rank in the early hours of a disaster strike. The predictions in this model are outlined in worst-case and best-case scenarios, which respectively inform the lower and higher range of the prediction. The necessity of developing such a predictive framework is highlighted by the fact that, despite the existing research in the literature, a framework for predicting the human impact and estimating the needs at the time of a disaster has yet to be developed. This can further be used to allocate resources in the response phase of a disaster, when data are scarce. Keywords: disaster management, natural disaster, pattern recognition, prediction
Procedia PDF Downloads 154
33616 Impact of Financial Inclusion on Gender Inequality: An Empirical Examination
Authors: Sumanta Kumar Saha, Jie Qin
Abstract:
This study analyzes the impact of financial inclusion on gender inequality in 126 countries belonging to different income groups during the 2005–2019 period. Due to its positive influence on poverty alleviation, economic growth, women's empowerment, and income inequality reduction, financial inclusion may help reduce gender inequality. This study constructs a novel composite financial inclusion index and applies both fixed-effect panel estimation and an instrumental variable approach to examine the impact of financial inclusion on gender inequality. The results indicate that financial inclusion can reduce gender inequality in developing and low- and lower-middle-income countries, but not in higher-income countries. The impact is not always immediate: past financial inclusion initiatives have a significant influence on future gender inequality. Financial inclusion is also significant if the poverty level is high and women's access to financial services is low compared to men's. When the poverty level is low, or women have equal access to financial services, financial inclusion does not significantly affect gender inequality. The study finds that, apart from financial inclusion, compulsory education and improvements in institutional quality promote gender equality in developing countries. The study proposes that lower-income countries use financial inclusion initiatives to improve gender equality. Other countries need to focus on other aspects, such as promoting educational support and institutional quality improvements, to achieve gender equality. Keywords: financial inclusion, gender inequality, institutional quality, women empowerment
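A country fixed-effects specification of the kind mentioned in this abstract can be sketched with the statsmodels formula interface; the toy panel, the variable names (fi_index, gii) and the dummy-variable treatment of the fixed effects are assumptions, not the paper's actual dataset or estimator.

```python
# Sketch: country and year fixed-effects panel regression of a gender-inequality
# measure on a financial-inclusion index. Toy panel and column names are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
countries, years = [f"c{i}" for i in range(20)], range(2005, 2020)
rows = []
for c in countries:
    base = rng.normal(0.5, 0.1)                       # country-specific level
    for y in years:
        fi = rng.uniform(0, 1)                        # financial inclusion index
        gii = base - 0.2 * fi + rng.normal(0, 0.05)   # gender inequality index
        rows.append({"country": c, "year": y, "fi_index": fi, "gii": gii})
panel = pd.DataFrame(rows)

# Country and year fixed effects enter as dummies via the formula interface.
fe_model = smf.ols("gii ~ fi_index + C(country) + C(year)", data=panel).fit()
print(fe_model.params["fi_index"], fe_model.pvalues["fi_index"])
```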
Procedia PDF Downloads 131
33615 Decision Making System for Clinical Datasets
Authors: P. Bharathiraja
Abstract:
Computer-aided decision-making systems are used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning in decision-making systems. Fuzzy rule-based inference can be used for classification in order to incorporate human reasoning in the decision-making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease data set from the University of California at Irvine (UCI) Machine Learning Repository. The data were split into training and testing sets using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule-based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and also resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making. Keywords: decision making, data mining, normalization, fuzzy rule, classification
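The preprocessing chain named here, mean imputation, min-max normalization and equal-width partitioning into fuzzy intervals, can be sketched with NumPy; the attribute values and the choice of triangular membership functions with three partitions are illustrative assumptions.

```python
# Sketch of the preprocessing chain: mean imputation, min-max normalisation, and
# equal-width partitioning into triangular fuzzy sets (toy attribute values).
import numpy as np

cholesterol = np.array([233.0, np.nan, 250.0, 204.0, 286.0])   # toy attribute values

# 1) impute missing values with the attribute mean
filled = np.where(np.isnan(cholesterol), np.nanmean(cholesterol), cholesterol)

# 2) min-max normalisation to [0, 1]
normalised = (filled - filled.min()) / (filled.max() - filled.min())

# 3) equal-width partitioning into three fuzzy intervals: low / medium / high
def triangular(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0, 1)

centres = np.array([0.0, 0.5, 1.0])            # peaks of the three equal-width sets
memberships = np.stack([triangular(normalised, c - 0.5, c, c + 0.5) for c in centres])
print(memberships.round(2))                     # rows: low, medium, high
```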
Procedia PDF Downloads 519
33614 Geostatistical and Geochemical Study of the Aquifer System Waters Complex Terminal in the Valley of Oued Righ-Arid Area Algeria
Authors: Asma Bettahar, Imed Eddine Nezli, Sameh Habes
Abstract:
Groundwater resources in the Oued Righ valley, like those of the other parts of the eastern basin of the Algerian Sahara, are represented by two major superposed aquifers: the Intercalary Continental (IC) and the Terminal Complex (TC). From a qualitative point of view, various studies have highlighted that the waters of this region show excessive mineralization, including the waters of the Terminal Complex (average EC equal to 5854.61 µS/cm). The present article takes a statistical approach based on two complementary multivariate methods, principal component analysis (ACP) and hierarchical agglomerative clustering (CAH), applied to analytical data on the waters of the multilayered Terminal Complex aquifer of the Oued Righ valley. The approach is to establish a correlation between the chemical composition of the water and the lithological nature of the formations of the different aquifer levels, and to predict possible connections between groundwater layers. The results show that the mineralization of the water is of geological origin and relates to the composition of the layers that make up the Terminal Complex. Keywords: complex terminal, mineralization, oued righ, statistical approach
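The two multivariate methods referred to (ACP, i.e. PCA, and CAH, i.e. hierarchical agglomerative clustering) can be sketched on a table of major-ion concentrations as follows; the samples, ion list and values are invented stand-ins, not the Oued Righ data.

```python
# Sketch: PCA (ACP) followed by hierarchical agglomerative clustering (CAH) on a
# table of major-ion concentrations. The samples and values below are invented.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
# rows = water samples, columns = Ca, Mg, Na, K, Cl, SO4, HCO3 (mg/L, toy values)
ions = rng.lognormal(mean=5, sigma=0.4, size=(30, 7))

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(ions))
tree = linkage(scores, method="ward")          # agglomerative clustering on PC scores
groups = fcluster(tree, t=3, criterion="maxclust")
print("cluster assignment per sample:", groups)
```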
Procedia PDF Downloads 388
33613 Sensitivity Analysis during the Optimization Process Using Genetic Algorithms
Authors: M. A. Rubio, A. Urquia
Abstract:
Genetic algorithms (GA) are applied to the solution of high-dimensional optimization problems. Additionally, sensitivity analysis (SA) is usually carried out to determine the effect of changes in the parameter values of the objective function on the optimal solutions. These two analyses (i.e., optimization and sensitivity analysis) are computationally intensive when applied to high-dimensional functions. The approach presented in this paper consists of performing the SA during the GA execution, by statistically analyzing the data obtained from running the GA. The advantage is that in this case the SA does not involve making additional evaluations of the objective function and, consequently, the proposed approach requires less computational effort than conducting optimization and SA in two consecutive steps. Keywords: optimization, sensitivity, genetic algorithms, model calibration
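The idea of reusing the GA's own evaluations for sensitivity analysis can be sketched as follows: every (parameter vector, fitness) pair is logged during the run and sensitivities are estimated from that log alone, with no extra objective calls. The toy objective, GA settings and correlation-based sensitivity measure are assumptions, not the paper's exact procedure.

```python
# Sketch: a toy GA that logs every evaluation, then estimates sensitivities from
# the log alone (no extra objective calls). Test function and settings are assumed.
import numpy as np

def objective(x):                       # assumed test function (weighted sphere)
    return 1.0 * x[0]**2 + 10.0 * x[1]**2 + 0.1 * x[2]**2

rng = np.random.default_rng(4)
pop = rng.uniform(-5, 5, size=(40, 3))
log_x, log_f = [], []

for generation in range(30):            # very small GA: evaluate, select, mutate
    fitness = np.array([objective(ind) for ind in pop])
    log_x.append(pop.copy()); log_f.append(fitness)
    parents = pop[np.argsort(fitness)[:20]]            # truncation selection
    pop = np.vstack([parents, parents + rng.normal(0, 0.5, parents.shape)])

X, f = np.vstack(log_x), np.concatenate(log_f)
# Sensitivity from the GA's own samples: |correlation| of each parameter with fitness.
sens = [abs(np.corrcoef(X[:, j], f)[0, 1]) for j in range(X.shape[1])]
print("sensitivity ranking (parameter index, most influential first):",
      np.argsort(sens)[::-1])
```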
Procedia PDF Downloads 437
33612 Exploring Socio-Economic Barriers of Green Entrepreneurship in Iran and Their Interactions Using Interpretive Structural Modeling
Authors: Younis Jabarzadeh, Rahim Sarvari, Negar Ahmadi Alghalandis
Abstract:
Entrepreneurship, at both the individual and the organizational level, is one of the most important driving forces in economic development and leads to growth and competition, job generation and social development. Especially in developing countries, the role of entrepreneurship in economic and social prosperity is emphasized even more. But the effect of global economic development on the environment is undeniable, especially its negative aspects, and there is a need to rethink current business models and the way entrepreneurs act when introducing new businesses, in order to address and embed environmental issues and achieve sustainable development. In this paper, green or sustainable entrepreneurship is addressed in Iran to identify the challenges and barriers that entrepreneurs in the economic and social sectors face in developing green business solutions. Sustainable or green entrepreneurship has been gaining interest among scholars in recent years, and addressing its challenges and barriers needs much more attention to fill the gap in the literature and facilitate the path those entrepreneurs are pursuing. This research comprises two main phases: qualitative and quantitative. In the qualitative phase, after a thorough literature review, the fuzzy Delphi method is utilized to verify those challenges and barriers by gathering and surveying a panel of experts. In this phase, several other contextually related factors were added to the list of identified barriers and challenges mentioned in the literature. Then, in the quantitative phase, Interpretive Structural Modeling is applied to construct a network of interactions among the barriers identified in the previous phase. Again, a panel of subject matter experts composed of academic and industry experts was surveyed. The results of this study can be used by policymakers in both the public and the industry sector to introduce more systematic solutions to eliminate those barriers and help entrepreneurs overcome the challenges of sustainable entrepreneurship. It also contributes to the literature as the first research of this type dealing with the barriers to sustainable entrepreneurship and exploring their interactions. Keywords: green entrepreneurship, barriers, fuzzy Delphi method, interpretive structural modeling
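The core mechanics of Interpretive Structural Modeling, deriving the reachability matrix by transitive closure of the expert-stated direct influences and partitioning the barriers into levels, can be sketched as follows; the barrier names and the adjacency matrix are invented examples, not the study's survey results.

```python
# Sketch of the ISM mechanics: transitive closure of the expert adjacency matrix
# (reachability) and level partitioning. Barrier names and links are invented.
import numpy as np

barriers = ["financing", "regulation", "awareness", "skills"]
A = np.array([[1, 0, 1, 0],     # direct influence matrix from an assumed expert survey
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=bool)

R = A.copy()                    # reachability = transitive closure (Warshall)
for k in range(len(barriers)):
    R = R | (R[:, [k]] & R[[k], :])

levels, remaining = [], set(range(len(barriers)))
while remaining:                # a barrier is at the current level when its
    level = [i for i in remaining      # reachability set is contained in its antecedent set
             if {j for j in remaining if R[i, j]} <= {j for j in remaining if R[j, i]}]
    levels.append([barriers[i] for i in level])
    remaining -= set(level)
print(levels)                   # top level (most dependent barriers) first
```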
Procedia PDF Downloads 167
33611 Simulation-Based Unmanned Surface Vehicle Design Using PX4 and Robot Operating System With Kubernetes and Cloud-Native Tooling
Authors: Norbert Szulc, Jakub Wilk, Franciszek Górski
Abstract:
This paper presents an approach for simulating and testing robotic systems based on PX4, using a local Kubernetes cluster. The approach leverages modern cloud-native tools and runs on single-board computers. Additionally, this solution enables the creation of datasets for computer vision and the evaluation of control system algorithms in an end-to-end manner. The paper compares this approach to the commonly used Docker-based approach. The approach was used to develop a simulation environment for an unmanned surface vehicle (USV) for RoboBoat 2023, by running a containerized configuration of the PX4 open-source autopilot connected to ROS and the Gazebo simulation environment. Keywords: cloud computing, Kubernetes, single board computers, simulation, ROS
Procedia PDF Downloads 77
33610 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for saving necessary quality control can be exploited through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes. As a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is at least made more difficult by this data availability. The implementation of a machine learning application can be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the costs to eliminate errors increase significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of the CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and classification for inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
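The business-understanding comparison described here, predicting the leakage volume flow as a continuous target versus predicting the pass/fail inspection decision derived from a threshold, can be sketched as follows; the synthetic features, the threshold and the random-forest models are assumptions and do not represent the Bosch Rexroth data or the paper's chosen methods.

```python
# Sketch: comparing a regression target (leakage volume flow) against the derived
# classification target (pass/fail at a threshold). Data and threshold are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, accuracy_score

rng = np.random.default_rng(5)
X = rng.normal(size=(600, 8))                       # stand-in for upstream process features
leakage = 2.0 + X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 1.0, 600)
passed = (leakage < 2.5).astype(int)                # inspection decision at an assumed limit

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(X, leakage, passed, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
clf = RandomForestClassifier(random_state=0).fit(X_tr, c_tr)

print("regression R^2:", round(r2_score(y_te, reg.predict(X_te)), 3))
print("classification accuracy:", round(accuracy_score(c_te, clf.predict(X_te)), 3))
```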
Procedia PDF Downloads 145
33609 Cardiokey: A Binary and Multi-Class Machine Learning Approach to Identify Individuals Using Electrocardiographic Signals on Wearable Devices
Authors: S. Chami, J. Chauvin, T. Demarest, Stan Ng, M. Straus, W. Jahner
Abstract:
Biometric tools such as fingerprint and iris recognition are widely used in industry to protect critical assets. However, their vulnerability and lack of robustness raise several concerns about the protection of highly critical assets. Biometrics based on electrocardiographic (ECG) signals is a robust identification tool. However, most state-of-the-art techniques have worked on clinical signals, which are of high quality and less noisy, rather than on signals extracted from wearable devices such as a smartwatch. In this paper, we present a complete machine learning pipeline that identifies people using ECG extracted from an off-person device. An off-person device is a wearable device that is not used in a medical context, such as a smartwatch. In addition, one of the main challenges of ECG biometrics is the variability of the ECG across different persons and different situations. To address this issue, we propose two different approaches: a per-person classifier and a one-for-all classifier. The first approach builds a binary classifier to distinguish one person from all others. The second approach builds a multi-class classifier that distinguishes the selected set of individuals from non-selected individuals (others). In preliminary results, the binary classifier obtained 90% accuracy on balanced data, and the second approach reported a log loss of 0.05 as a multi-class score. Keywords: biometrics, electrocardiographic, machine learning, signals processing
Procedia PDF Downloads 142
33608 Sentiment Analysis of Ensemble-Based Classifiers for E-Mail Data
Authors: Muthukumarasamy Govindarajan
Abstract:
Detection of unwanted, unsolicited mail, called spam, is an interesting area of email research. It is necessary to evaluate the performance of any new spam classifier using standard data sets. Recently, ensemble-based classifiers have gained popularity in this domain. In this research work, an efficient email filtering approach based on ensemble methods is addressed for developing an accurate and sensitive spam classifier. The proposed approach employs Naive Bayes (NB), Support Vector Machine (SVM) and Genetic Algorithm (GA) as base classifiers, along with different ensemble methods. The experimental results show that the ensemble classifiers perform with greater accuracy than the individual classifiers, and the hybrid model results are found to be better than those of the combined models for the e-mail dataset. The proposed ensemble-based classifiers turn out to be good in terms of classification accuracy, which is considered an important criterion for building a robust spam classifier. Keywords: accuracy, arcing, bagging, genetic algorithm, Naive Bayes, sentiment mining, support vector machine
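The ensemble idea can be sketched by combining Naive Bayes and an SVM in a voting ensemble over bag-of-words features, with bagging shown as one further flavour; the toy messages are invented and the paper's genetic-algorithm component is not reproduced here.

```python
# Sketch: an ensemble spam filter combining Naive Bayes and an SVM by hard voting,
# plus a bagged NB. Toy messages only; the paper's GA step is not reproduced.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier, BaggingClassifier
from sklearn.pipeline import make_pipeline

mails = ["win a free prize now", "cheap loans click here",
         "meeting moved to 3pm", "please review the attached report"]
labels = [1, 1, 0, 0]                              # 1 = spam, 0 = ham

voting = make_pipeline(
    CountVectorizer(),
    VotingClassifier([("nb", MultinomialNB()), ("svm", SVC(kernel="linear"))],
                     voting="hard"),
)
voting.fit(mails, labels)
print(voting.predict(["free prize, click here now"]))   # expected: 1 (spam)

# Bagging (bootstrap aggregation) of the NB base learner, another ensemble flavour.
bagged_nb = make_pipeline(CountVectorizer(), BaggingClassifier(MultinomialNB(), n_estimators=10))
bagged_nb.fit(mails, labels)
```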
Procedia PDF Downloads 143
33607 Optimizing Telehealth Internet of Things Integration: A Sustainable Approach through Fog and Cloud Computing Platforms for Energy Efficiency
Authors: Yunyong Guo, Sudhakar Ganti, Bryan Guo
Abstract:
The swift proliferation of telehealth Internet of Things (IoT) devices has sparked concerns regarding energy consumption and the need for streamlined data processing. This paper presents an energy-efficient model that integrates telehealth IoT devices into a platform based on fog and cloud computing. This integrated system provides a sustainable and robust solution to address the challenges. Our model strategically utilizes fog computing as a localized data processing layer and leverages cloud computing for resource-intensive tasks, resulting in a significant reduction in overall energy consumption. The incorporation of adaptive energy-saving strategies further enhances the efficiency of our approach. Simulation analysis validates the effectiveness of our model in improving energy efficiency for telehealth IoT systems, particularly when integrated with localized fog nodes and both private and public cloud infrastructures. Subsequent research endeavors will concentrate on refining the energy-saving model, exploring additional functional enhancements, and assessing its broader applicability across various healthcare and industry sectors.Keywords: energy-efficient, fog computing, IoT, telehealth
Procedia PDF Downloads 79
33606 Prompt Design for Code Generation in Data Analysis Using Large Language Models
Authors: Lu Song Ma Li Zhi
Abstract:
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.Keywords: large language models, prompt design, data analysis, code generation
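The prompt-design strategy described here, wrapping a natural-language analysis request and the dataset schema in a structured template that asks the model for runnable code, can be sketched as follows; the template wording and field names are assumptions, and no particular LLM API is called.

```python
# Sketch of a prompt template for data-analysis code generation. The template
# wording and schema fields are assumptions; no particular LLM API is called here.
import pandas as pd

PROMPT_TEMPLATE = """You are a data analysis assistant.
Dataset columns (name: dtype): {schema}
Task described in natural language: {request}
Return only runnable Python (pandas) code that operates on a DataFrame named `df`
and prints the requested result. Do not add explanations."""

def build_prompt(df: pd.DataFrame, request: str) -> str:
    schema = ", ".join(f"{c}: {t}" for c, t in df.dtypes.astype(str).items())
    return PROMPT_TEMPLATE.format(schema=schema, request=request)

df = pd.DataFrame({"region": ["north", "south"], "sales": [120.0, 95.5]})
prompt = build_prompt(df, "average sales per region, sorted descending")
print(prompt)   # this string would then be sent to the chosen large language model
```

The feedback-and-adjustment mechanism mentioned in the abstract would add a second round in which execution errors or wrong outputs are appended to the prompt and the model is asked to revise its code; that loop is not shown here.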
Procedia PDF Downloads 43
33605 Government Big Data Ecosystem: A Systematic Literature Review
Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis
Abstract:
Data that are high in volume, velocity, and veracity, and come from a variety of sources, are generated in all sectors, including the government sector. Globally, public administrations are pursuing (big) data as a new technology and trying to adopt a data-centric architecture for hosting and sharing data. Properly executed, big data and data analytics in the government (big) data ecosystem can lead to data-driven government and have a direct impact on the way policymakers work and citizens interact with governments. In this research paper, we conduct a systematic literature review. The main aims of this paper are to highlight the essential aspects of the government (big) data ecosystem and to explore the most critical socio-technical factors that contribute to its successful implementation. The essential aspects of the government (big) data ecosystem include its definition, data types, data lifecycle models, and actors and their roles. We also discuss the potential impact of (big) data in public administration and gaps in the literature on government data ecosystems. As this is a new topic, we did not find specific articles on the government (big) data ecosystem and therefore focused our research on various relevant areas such as humanitarian data, open government data, scientific research data, and industry data. Keywords: applications of big data, big data, big data types, big data ecosystem, critical success factors, data-driven government, egovernment, gaps in data ecosystems, government (big) data, literature review, public administration, systematic review
Procedia PDF Downloads 232
33604 Decision-Making Process and Its Method: Effective Usage Strategies
Authors: Kubra Korkmaz Onat
Abstract:
Decision-making significantly influences outcomes and shapes future actions, making it a crucial aspect of both personal and professional life. This study examines various decision-making approaches, focusing on their procedures and applications. The rational decision-making model is highlighted for its systematic approach and reliance on data analysis and logical reasoning. Additionally, the study explores the consensus, weighted scoring, voting, and brainstorming analysis methods. Key findings indicate that each method has unique strengths and is best suited to specific contexts. The article concludes by offering practical guidance on how to choose the appropriate decision-making approach based on the circumstances. Keywords: decision-making, decision-making process, decision-making methods, group decision-making
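The weighted scoring method mentioned in this abstract can be sketched in a few lines; the options, criteria and weights below are invented placeholders used only to show the mechanics.

```python
# Sketch of the weighted scoring method: score each option on each criterion,
# weight the criteria, and pick the highest total. All numbers are placeholders.
criteria_weights = {"cost": 0.4, "speed": 0.35, "risk": 0.25}
options = {
    "option_a": {"cost": 7, "speed": 9, "risk": 6},
    "option_b": {"cost": 9, "speed": 6, "risk": 8},
}

totals = {name: sum(criteria_weights[c] * scores[c] for c in criteria_weights)
          for name, scores in options.items()}
best = max(totals, key=totals.get)
print(totals, "->", best)
```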
Procedia PDF Downloads 8
33603 A Machine Learning Decision Support Framework for Industrial Engineering Purposes
Authors: Anli Du Preez, James Bekker
Abstract:
Data is currently one of the most critical and influential emerging technologies. However, the true potential of data is yet to be exploited since, currently, only about 1% of generated data is ever actually analyzed for value creation. There is a data gap where data are not explored, due to the lack of data analytics infrastructure and of the required data analytics skills. This study developed a decision support framework for data analytics by following Jabareen's framework development methodology. The study focused on machine learning algorithms, which are a subset of data analytics. The developed framework is designed to assist data analysts with little experience in choosing the appropriate machine learning algorithm given the purpose of their application. Keywords: Data analytics, Industrial engineering, Machine learning, Value creation
Procedia PDF Downloads 168