Search results for: iterative algorithms
331 Bridge Healthcare Access Gap with Artificial Intelligence
Authors: Moshmi Sangavarapu
Abstract:
The US healthcare industry has undergone tremendous digital transformation in recent years, but critical care access for lower-income ethnic groups is still in its infancy. This population has historically shown substantial hesitation to seek any medical assistance. While the lack of sufficient financial resources plays a critical role, the existing cultural and knowledge barriers also contribute significantly to widening the access gap. It is imperative to break these barriers to ensure timely access to therapeutic procedures that can save lives. Based on ongoing research, healthcare access barriers can be best addressed by engaging the untapped potential of caregiver communities first. They play a critical role in patients’ diagnoses, building healthcare knowledge and instilling confidence in required therapeutic procedures. Recent technological advancements have opened many avenues for reaching the large caregiver community in smart ways. A digitized go-to-market strategy featuring connected media coupled with smart IoT devices and geo-location targeting can be collectively leveraged to reach this key audience group. AI/ML algorithms can be thoroughly trained to identify relevant data signals from users' location and browsing behavior and determine useful marketing touchpoints. This web behavior can be further combined with natural language processing to identify contextually relevant interest topics and recognize potential caregivers on digital channels to serve the brand message. In conclusion, grasping the true health access journey of any lower-income ethnic group is important to design beneficial touchpoints that can alleviate patients’ concerns and allow them to break their own access barriers and opt for timely and quality healthcare. Keywords: healthcare access, market access, diversity barriers, patient journey
Procedia PDF Downloads 56
330 The Impact of Intelligent Control Systems on Biomedical Engineering and Research
Authors: Melkamu Tadesse Getachew
Abstract:
Intelligent control systems have revolutionized biomedical engineering, advancing research and enhancing medical practice. This review paper examines the impact of intelligent control on various aspects of biomedical engineering. It analyzes how these systems enhance precision and accuracy in biomedical instrumentation, improving diagnostics, monitoring, and treatment. Integration challenges are addressed, and potential solutions are proposed. The paper also investigates the optimization of drug delivery systems through intelligent control. It explores how intelligent systems contribute to precise dosing, targeted drug release, and personalized medicine. Challenges related to controlled drug release and patient variability are discussed, along with potential avenues for overcoming them. The comparison of algorithms used in intelligent control systems in biomedical control is also reviewed. The implications of intelligent control in computational and systems biology are explored, showcasing how these systems enable enhanced analysis and prediction of complex biological processes. Challenges such as interpretability, human-machine interaction, and machine reliability are examined, along with potential solutions. Intelligent control in biomedical engineering also plays a crucial role in risk management during surgical operations. This section demonstrates how intelligent systems improve patient safety and surgical outcomes when integrated into surgical robots, augmented reality, and preoperative planning. The challenges associated with these implementations and potential solutions are discussed in detail. In summary, this review paper comprehensively explores the widespread impact of intelligent control on biomedical engineering and shows a promising future for addressing human health issues. It discusses application areas, challenges, and potential solutions, highlighting the transformative potential of these systems in advancing research and improving medical practice. Keywords: intelligent control systems, biomedical instrumentation, drug delivery systems, robotic surgical instruments, computational monitoring and modeling
Procedia PDF Downloads 47
329 The Implications of Technological Advancements on the Constitutional Principles of Contract Law
Authors: Laura Çami (Vorpsi), Xhon Skënderi
Abstract:
In today's rapidly evolving technological landscape, the traditional principles of contract law are facing significant challenges. The emergence of new technologies, such as electronic signatures, smart contracts, and online dispute resolution mechanisms, is transforming the way contracts are formed, interpreted, and enforced. This paper examines the implications of these technological advancements on the constitutional principles of contract law. One of the fundamental principles of contract law is freedom of contract, which ensures that parties have the autonomy to negotiate and enter into contracts as they see fit. However, the use of technology in the contracting process has the potential to disrupt this principle. For example, online platforms and marketplaces often offer standard-form contracts, which may not reflect the specific needs or interests of individual parties. This raises questions about the equality of bargaining power between parties and the extent to which parties are truly free to negotiate the terms of their contracts. Another important principle of contract law is the requirement of consideration, which requires that each party receives something of value in exchange for their promise. The use of digital assets, such as cryptocurrencies, has created new challenges in determining what constitutes valuable consideration in a contract. Due to the ambiguity in this area, disagreements about the legality and enforceability of such contracts may arise. Furthermore, the use of technology in dispute resolution mechanisms, such as online arbitration and mediation, may raise concerns about due process and access to justice. The use of algorithms and artificial intelligence to determine the outcome of disputes may also raise questions about the impartiality and fairness of the process. Finally, it should be noted that there are many different and complex effects of technical improvements on the fundamental constitutional foundations of contract law. As technology continues to evolve, it will be important for policymakers and legal practitioners to consider the potential impacts on contract law and to ensure that the principles of fairness, equality, and access to justice are preserved in the contracting process.Keywords: technological advancements, constitutional principles, contract law, smart contracts, online dispute resolution, freedom of contract
Procedia PDF Downloads 152
328 Modelling, Assessment, and Optimisation of Rules for Selected Umgeni Water Distribution Systems
Authors: Khanyisile Mnguni, Muthukrishnavellaisamy Kumarasamy, Jeff C. Smithers
Abstract:
Umgeni Water is a water board that supplies most parts of KwaZulu-Natal with bulk potable water. Currently, Umgeni Water is running its distribution system based on required reservoir levels and demands and does not consider the energy cost at different times of the day, the number of pump switches, or background leakages. Including these constraints can reduce operational cost, energy usage, and leakages, and increase performance. Optimising pump schedules can reduce energy usage and costs while adhering to hydraulic and operational constraints. Umgeni Water has installed an online hydraulic software, WaterNet Advisor, that allows running different operational scenarios prior to implementation in order to optimise the distribution system. This study will investigate operation scenarios using optimisation techniques and WaterNet Advisor for a local water distribution system. Based on studies reported in the literature, introducing pump scheduling optimisation can reduce energy usage by approximately 30% without any change in infrastructure. Including tariff structures in an optimisation problem can reduce pumping costs by 15%, while including leakages decreases cost by 10%, and the pressure drop in the system can be up to 12 m. Genetic optimisation algorithms are widely used due to their ability to solve nonlinear, non-convex, and mixed-integer problems. Other methods such as branch-and-bound linear programming have also been successfully used. A suitable optimisation method will be chosen based on its efficiency. The objective of the study is to reduce energy usage, operational cost, and leakages, and the feasibility of the optimal solution will be checked using the WaterNet Advisor. This study will provide an overview of the optimisation of hydraulic networks and progress made to date in multi-objective optimisation for a selected sub-system operated by Umgeni Water. Keywords: energy usage, pump scheduling, WaterNet Advisor, leakages
Procedia PDF Downloads 94
327 VeriFy: A Solution to Implement Autonomy Safely and According to the Rules
Authors: Michael Naderhirn, Marco Pavone
Abstract:
Problem statement, motivation, and aim of work: So far, control algorithms have been developed by control engineers in such a way that the controller is shown to fit a specification by testing. When it comes to the certification of an autonomous car in highly complex scenarios, the challenge is much greater, since such a controller must mathematically guarantee that it implements the rules of the road while, on the other side, guaranteeing aspects like safety and real-time executability. What if it becomes possible to solve this demanding problem by combining formal verification and system theory? The aim of this work is to present a workflow to solve the above-mentioned problem. Summary of the presented results / main outcomes: We show the usage of an English-like language to transform the rules of the road into system specifications for an autonomous car. The language-based specifications are used to define system functions and interfaces. Based on that, a formal model is developed which correctly models the specifications. On the other side, a mathematical model describing the system's dynamics is used to calculate the system's reachability set, which is further used to determine the system input boundaries. Then a motion planning algorithm is applied inside the system boundaries to find an optimized trajectory in combination with the formal specification model while satisfying the specifications. The result is a control strategy which can be applied in real time, independent of the scenario, with a mathematical guarantee to satisfy a predefined specification. We demonstrate the applicability of the method in simulated driving scenarios and discuss a potential certification. Originality, significance, and benefit: To the authors’ best knowledge, it is the first time that an automated workflow can be shown which combines a specification in an English-like language and a mathematical model in a formally verified way to synthesize a controller for potential real-time applications like autonomous driving. Keywords: formal system verification, reachability, real time controller, hybrid system
Procedia PDF Downloads 243
326 A Personality-Based Behavioral Analysis on eSports
Authors: Halkiopoulos Constantinos, Gkintoni Evgenia, Koutsopoulou Ioanna, Antonopoulou Hera
Abstract:
E-sports and e-gaming have emerged in recent years as internet use has become universal, and e-gamers are the new reality in our homes. The excessive involvement of young adults with e-sports has already been revealed, and its adverse consequences have been reported in research over the past few years, but the issue has not been fully studied yet. The present research is conducted in Greece and studies the psychological profile of video game players, providing information on personality traits, habits, and emotional status that affect online gamers’ behaviors, in order to help professionals and policy makers address the problem. Three standardized self-report questionnaires were administered to participants, who were young male and female adults aged 19-26 years. The Profile of Mood States (POMS) scale was used to evaluate people’s perceptions of their everyday life mood; the personality features that can be traced back to people’s habits and anticipated reactions were measured by the Eysenck Personality Questionnaire (EPQ), and the Trait Emotional Intelligence Questionnaire (TEIQue) was used to measure which cognitive parameters (gamers’ beliefs) and emotional parameters (gamers’ emotional abilities) mainly affected and predicted gamers’ behaviors, leisure-time activities, and gaming behaviors. Data mining techniques, based on machine learning algorithms available in the software package R, were used to analyze the data. The research findings attempt to designate the effect of personality traits, emotional status, and emotional intelligence on e-sports and gamers’ behaviors, and to help policy makers and stakeholders take action, shape social policy, and prevent the adverse consequences on young adults. The need for further research, prevention, and treatment strategies is also addressed. Keywords: e-sports, e-gamers, personality traits, POMS, emotional intelligence, data mining, R
Procedia PDF Downloads 233
325 One-Class Classification Approach Using Fukunaga-Koontz Transform and Selective Multiple Kernel Learning
Authors: Abdullah Bal
Abstract:
This paper presents a one-class classification (OCC) technique based on the Fukunaga-Koontz Transform (FKT) for binary classification problems. The FKT is originally a powerful tool for feature selection and ordering in two-class problems. To utilize the standard FKT for the data domain description problem (i.e., one-class classification), in this paper, a set of non-class samples that lie outside the boundary of the positive class (target class) formed with limited training data has been constructed synthetically. The tunnel-like decision boundary around the upper and lower borders of the target class samples has been designed using statistical properties of feature vectors belonging to the training data. To capture higher-order statistics of the data and increase discrimination ability, the proposed method, termed one-class FKT (OC-FKT), has been extended to its nonlinear version via kernel machines and is referred to as OC-KFKT for short. Multiple kernel learning (MKL) is a favorable family of machine learning methods that tries to find an optimal combination of a set of sub-kernels to achieve a better result. However, the discriminative ability of some of the base kernels may be low, and an OC-KFKT designed with this type of kernel leads to unsatisfactory classification performance. To address this problem, the quality of the sub-kernels should be evaluated, and the weak kernels must be discarded before the final decision-making process. MKL/OC-FKT and selective MKL/OC-FKT frameworks have been designed, inspired by ensemble learning (EL), to weight and then select the sub-classifiers using the discriminability and diversity measured by eigenvalue ratios. The eigenvalue ratios have been assessed based on their regions on the FKT subspaces. The comparative experiments, performed on various low- and high-dimensional data against state-of-the-art algorithms, confirm the effectiveness of our techniques, especially under small sample size (SSS) conditions. Keywords: ensemble methods, Fukunaga-Koontz transform, kernel-based methods, multiple kernel learning, one-class classification
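As a rough illustration of the Fukunaga-Koontz idea the abstract builds on, a minimal NumPy sketch follows: the scatter matrices of the target class and of synthetic non-class samples are jointly whitened, so that eigenvectors that best represent one class are the worst for the other. The random data, dimensions, and thresholds are assumptions for illustration only; the synthetic boundary construction, kernelization, and MKL selection described above are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, (200, 5))       # positive (target) class samples
non_class = rng.normal(0.0, 3.0, (200, 5))    # stand-in for synthetic non-class samples

# Fukunaga-Koontz transform: whiten the summed scatter, then eigendecompose one class.
S1 = target.T @ target / len(target)
S2 = non_class.T @ non_class / len(non_class)
vals, vecs = np.linalg.eigh(S1 + S2)
P = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12)))   # whitening operator

S1_t = P.T @ S1 @ P                            # transformed target scatter
w, V = np.linalg.eigh(S1_t)                    # eigenvalues lie in [0, 1]; the non-class scatter
                                               # shares eigenvectors with eigenvalues 1 - w
order = np.argsort(w)[::-1]
target_axes = (P @ V)[:, order[:2]]            # most target-discriminative FKT axes
print("FKT eigenvalue spectrum (target share):", np.round(np.sort(w)[::-1], 3))
```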
Procedia PDF Downloads 24
324 Optimization of Traffic Agent Allocation for Minimizing Bus Rapid Transit Cost on Simplified Jakarta Network
Authors: Gloria Patricia Manurung
Abstract:
The Jakarta Bus Rapid Transit (BRT) system, which was established in 2009 to reduce private vehicle usage and ease rush-hour gridlock throughout the Greater Jakarta area, has failed to achieve its purpose. With private vehicle ownership gradually increasing and road space reduced by the BRT lane construction, private vehicle users intuitively invade the exclusive BRT lane, creating local traffic along the BRT network. The cost on invaded BRT lanes becomes the same as on the rest of the road network, making the BRT, which is supposed to be the main public transportation in the city, unreliable. Efforts to guard critical lanes and prevent the invasion by allocating traffic agents at several intersections have been expended, leading to improved congestion levels along the lane. Given a set number of traffic agents, this study uses an analytical approach to find the best deployment strategy of traffic agents on a simplified Jakarta road network that minimizes the BRT link cost, which is expected to lead to an improvement in BRT system time reliability. A user-equilibrium model of traffic assignment is used to reproduce the origin-destination demand flow on the network, and the optimum solution can conventionally be obtained with a brute-force algorithm. This method’s main constraint is that traffic assignment simulation time escalates exponentially with the number of agents and the network size. Our proposed metaheuristic and heuristic algorithms show a linear increase in simulation time and result in a minimized BRT cost approaching that of the brute-force optimization. Further analysis of the overall network link cost should be performed to see the impact of traffic agent deployment on the network system. Keywords: traffic assignment, user equilibrium, greedy algorithm, optimization
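A toy sketch of the greedy deployment idea suggested by the keywords: repeatedly place the next agent at the intersection that yields the largest estimated reduction in BRT link cost. The intersection names and the cost_reduction figures are invented placeholders, not Jakarta data, and a full study would re-run the user-equilibrium assignment after every placement rather than use fixed savings.

```python
# Greedy allocation of a fixed budget of traffic agents to candidate intersections.
# cost_reduction[i] stands in for the BRT link-cost saving obtained by guarding
# intersection i; in practice it would come from a user-equilibrium assignment run.

def greedy_agent_allocation(cost_reduction: dict, n_agents: int) -> list:
    chosen = []
    remaining = dict(cost_reduction)
    for _ in range(n_agents):
        if not remaining:
            break
        best = max(remaining, key=remaining.get)   # largest marginal saving first
        chosen.append(best)
        del remaining[best]
    return chosen

savings = {"Harmoni": 120.0, "Dukuh Atas": 95.0, "Blok M": 60.0, "Senen": 45.0}
print(greedy_agent_allocation(savings, n_agents=2))   # ['Harmoni', 'Dukuh Atas']
```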
Procedia PDF Downloads 232
323 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
SIP problems are part of non-classical optimization. These are problems in which the number of variables is finite and the number of constraints is infinite: semi-infinite programming problems. Most algorithms for semi-infinite programming problems reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of the individual investments in the portfolio. For example, we could invest M euros in N shares for a specified period. Let yi > 0 be the return at the end of the period for each unit of money invested in stock i (i = 1, ..., N). The goal here is to determine the amount xi to be invested in stock i, i = 1, ..., N, such that we maximize the end-of-period value y^T x, where x = (x1, ..., xN) and y = (y1, ..., yN). For us, the optimal portfolio means the portfolio that is best in terms of the risk-return trade-off and that meets the investor's goals and risk tolerance. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain. Thus we have a semi-infinite programming problem. We solve the semi-infinite optimization problem of portfolio selection using outer approximation methods, as sketched below. This approach can be considered a development of the Eaves-Zangwill method, applying the multi-start technique in all of the iterations for the search of relevant constraint parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution and others, is based on optimality criteria for quasi-optimal functions. As a result, we obtain a mathematical model and the optimal investment portfolio when yields are not known from the beginning. Finally, we apply this algorithm to a specific case of a Colombian bank. Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
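A compact sketch of the outer-approximation idea referenced above: the (in principle infinite) family of return scenarios is approximated by a growing finite set, and at each iteration the most violated scenario is added before re-solving a linear program that maximizes the worst-case end-of-period value y^T x subject to the budget. The scenario pool, budget, and stopping rule are illustrative assumptions, not the Eaves-Zangwill multi-start scheme or the Colombian bank case.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
M, N = 1.0, 4                                             # budget and number of shares
scenario_pool = 1.0 + rng.normal(0.02, 0.05, (500, N))    # candidate returns y, standing in for the infinite index set

active = [scenario_pool[0]]
for _ in range(20):                                       # outer-approximation iterations
    k = len(active)
    # variables: x (N allocations) and t (worst-case value); maximize t
    c = np.zeros(N + 1); c[-1] = -1.0
    A_ub = np.hstack([-np.array(active), np.ones((k, 1))])   # rows encode t - y_j^T x <= 0
    b_ub = np.zeros(k)
    A_eq = np.hstack([np.ones((1, N)), np.zeros((1, 1))])     # budget: sum x = M
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[M],
                  bounds=[(0, None)] * N + [(None, None)])
    x, t = res.x[:N], res.x[-1]
    worst = scenario_pool[np.argmin(scenario_pool @ x)]       # most violated scenario
    if worst @ x >= t - 1e-9:
        break                                                  # no scenario in the pool violates the current solution
    active.append(worst)

print("allocation:", np.round(x, 3), "worst-case value:", round(t, 4))
```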
Procedia PDF Downloads 309
322 Optimizing The Residential Design Process Using Automated Technologies
Authors: Martin Georgiev, Milena Nanova, Damyan Damov
Abstract:
Architects, engineers, and developers need to analyse and implement a wide spectrum of data in different formats, if they want to produce viable residential developments. Usually, this data comes from a number of different sources and is not well structured. The main objective of this research project is to provide parametric tools working with real geodesic data that can generate residential solutions. Various codes, regulations and design constraints are described by variables and prioritized. In this way, we establish a common workflow for architects, geodesists, and other professionals involved in the building and investment process. This collaborative medium ensures that the generated design variants conform to various requirements, contributing to a more streamlined and informed decision-making process. The quantification of distinctive characteristics inherent to typical residential structures allows a systematic evaluation of the generated variants, focusing on factors crucial to designers, such as daylight simulation, circulation analysis, space utilization, view orientation, etc. Integrating real geodesic data offers a holistic view of the built environment, enhancing the accuracy and relevance of the design solutions. The use of generative algorithms and parametric models offers high productivity and flexibility of the design variants. It can be implemented in more conventional CAD and BIM workflow. Experts from different specialties can join their efforts, sharing a common digital workspace. In conclusion, our research demonstrates that a generative parametric approach based on real geodesic data and collaborative decision-making could be introduced in the early phases of the design process. This gives the designers powerful tools to explore diverse design possibilities, significantly improving the qualities of the building investment during its entire lifecycle.Keywords: architectural design, residential buildings, urban development, geodesic data, generative design, parametric models, workflow optimization
Procedia PDF Downloads 55
321 Using Machine Learning to Classify Different Body Parts and Determine Healthiness
Authors: Zachary Pan
Abstract:
Our general mission is to solve the problem of classifying images into different body part types and deciding if each of them is healthy or not. However, for now, we will determine healthiness for only one-sixth of the body parts, specifically the chest. We will detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach to this problem is to split the problem into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Now, using the test set, we can obtain a realistic accuracy the models will have on images in the real world since these testing images have never been seen by the models before. In order to increase this testing accuracy, we can also apply many complex algorithms to the models, like multiplicative weight update. For the second part of the problem, to determine if the body part is healthy, we can have another dataset consisting of healthy and non-healthy images of the specific body part and once again split that into the test and training sets. We then use another neural network to train on those training set images and use the testing set to figure out its accuracy. We will do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, neural networks with multiplicative weight update, neural networks with the black box algorithm, and the convolutional neural network achieved 96.83 percent accuracy, 97.33 percent accuracy, 97.83 percent accuracy, 96.67 percent accuracy, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines if the images are healthy or not is around 78.37 percent accuracy. Keywords: body part, healthcare, machine learning, neural networks
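For reference, a minimal Keras sketch of the convolutional classifier family whose accuracies are reported above; the input size, the six body-part classes, and the random placeholder arrays are assumptions, not the authors' dataset or architecture.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 64x64 grayscale images, six body-part classes.
x = np.random.rand(256, 64, 64, 1).astype("float32")
y = np.random.randint(0, 6, size=256)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax"),   # one output per body-part class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, validation_split=0.2, verbose=0)
print(model.evaluate(x, y, verbose=0))                # [loss, accuracy] on the placeholder data
```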
Procedia PDF Downloads 109
320 Deep Learning Prediction of Residential Radon Health Risk in Canada and Sweden to Prevent Lung Cancer Among Non-Smokers
Authors: Selim M. Khan, Aaron A. Goodarzi, Joshua M. Taron, Tryggve Rönnqvist
Abstract:
Indoor air quality, a prime determinant of health, is strongly influenced by the presence of hazardous radon gas within the built environment. As a health issue, dangerously high indoor radon arose within the 20th century to become the 2nd leading cause of lung cancer. As 21st century building metrics and human behaviors have captured, contained, and concentrated radon to yet higher and more hazardous levels, the issue is rapidly worsening in Canada. It is established that Canadians in the Prairies are the 2nd highest radon-exposed population in the world, with 1 in 6 residences experiencing 0.2-6.5 millisieverts (mSv) radiation per week, whereas the Canadian Nuclear Safety Commission sets maximum 5-year occupational limits for atomic workplace exposure at only 20 mSv. This situation is also deteriorating over time within newer housing stocks containing higher levels of radon. Deep machine learning (LSTM) algorithms were applied to analyze multiple quantitative and qualitative features, determine the most important contributory factors, and predict radon levels in the known past (1990-2020) and the projected future (2021-2050). The findings showed a gradual downward pattern in Sweden, whereas levels would continue to go from high to higher in Canada over time. The contributory factors were found to be basement porosity, roof insulation depth, R-factor, and the air dynamics of the indoor environment related to human window-opening behaviour. Building codes must consider including these factors to ensure adequate indoor ventilation and healthy living that can prevent lung cancer in non-smokers. Keywords: radon, building metrics, deep learning, LSTM prediction model, lung cancer, Canada, Sweden
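A minimal LSTM sketch in the spirit of the time-series prediction described above, trained on a synthetic yearly radon series; the window length, number of units, and the synthetic upward trend are illustrative assumptions only, not the study's features or data.

```python
import numpy as np
import tensorflow as tf

# Synthetic yearly indoor-radon series (Bq/m3) with a mild upward trend plus noise.
years = np.arange(1990, 2021)
radon = 80 + 0.8 * (years - 1990) + np.random.normal(0, 4, len(years))

window = 5
X = np.array([radon[i:i + window] for i in range(len(radon) - window)])[..., None]
y = radon[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=100, verbose=0)

last = radon[-window:].copy()
for _ in range(5):                      # roll the window forward to project future years
    nxt = float(model.predict(last[None, :, None], verbose=0)[0, 0])
    print(round(nxt, 1))
    last = np.append(last[1:], nxt)
```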
Procedia PDF Downloads 112
319 Sentiment Analysis of Social Media Responses: A Comparative Study of the National Democratic Alliance (NDA) and the Indian National Developmental Inclusive Alliance (INDIA) during the Indian General Elections 2024
Authors: Pankaj Dhiman, Simranjeet Kaur
Abstract:
This research paper presents a comprehensive sentiment analysis of social media responses to videos on Facebook, YouTube, Twitter, and Instagram during the 2024 Indian general elections. The study focuses on the sentiment patterns of voters towards the National Democratic Alliance (NDA) and The Indian National Developmental Inclusive Alliance (INDIA) on these platforms. The analysis aims to understand the impact of social media on voter sentiment and its correlation with the election outcome. The study employed a mixed-methods approach, combining both quantitative and qualitative methods. With a total of 200 posts analysed during general election-2024 final phase, the sentiment analysis was conducted using natural language processing (NLP) techniques, including sentiment dictionaries and machine learning algorithms. The results show that NDA received significantly more positive sentiment responses across all platforms, with a positive sentiment score of 47% compared to INDIA's score of 38.98 %. The analysis also revealed that Twitter and YouTube were the most influential platforms in shaping voter sentiment, with 60% of the total sentiment score coming from these two platforms. The study's findings suggest that social media sentiment analysis can be a valuable tool for understanding voter sentiment and predicting election outcomes. The results also highlight the importance of social media in shaping public opinion and the need for political parties to engage effectively with voters on these platforms. The study's implications are significant, as they indicate that social media can be a key factor in determining the outcome of elections. The findings also underscore the need for political parties to develop effective social media strategies to engage with voters and shape public opinion.Keywords: Indian Elections-2024, NDA, INDIA, sentiment analysis, social media, democracy
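A minimal sketch of the dictionary-based side of such an analysis, using NLTK's VADER scorer; the example posts and party labels are invented placeholders, not the study's 200-post corpus or its machine learning models.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

posts = [
    ("NDA", "Great turnout today, the rally was inspiring!"),
    ("INDIA", "Disappointed by the empty promises in this speech."),
    ("NDA", "Not convinced by the manifesto at all."),
]

for party, text in posts:
    score = sia.polarity_scores(text)["compound"]     # -1 (negative) .. +1 (positive)
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(party, label, round(score, 3))
```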
Procedia PDF Downloads 56
318 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery
Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas
Abstract:
The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics which helps in energy management, by accurate model-based state estimation algorithms. So far the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications, the model parameters are time-varying, changes with current, temperature, state of charge (SOC), and aging of the battery and this make a great impact on the performance of the model. Therefore, to increase the equivalent circuit model performance, the parameter estimation has been carried out in the frequency domain. The battery is a very complex system, which is associated with various chemical reactions and heat generation. Therefore, it’s very difficult to select the optimal model structure. As we know, if the model order is increased, the model accuracy will be improved automatically. However, the higher order model will face the tendency of over-parameterization and unfavorable prediction capability, while the model complexity will increase enormously. In the time domain, it becomes difficult to solve higher order differential equations as the model order increases. This problem can be resolved by frequency domain analysis, where the overall computational problems due to ill-conditioning reduce. In the frequency domain, several dominating frequencies can be found in the input as well as output data. The selective frequency domain estimation has been carried out, first by estimating the frequencies of the input and output by subspace decomposition, then by choosing the specific bands from the most dominating to the least, while carrying out the least-square, recursive least square and Kalman Filter based parameter estimation. In this paper, a second order battery model consisting of three resistors, two capacitors, and one SOC controlled voltage source has been chosen. For model identification and validation hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition
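To make the parameter-estimation step concrete, a small recursive-least-squares sketch for a first-order ARX stand-in of the equivalent-circuit voltage response is shown below; the synthetic current/voltage data and the first-order time-domain structure are simplifying assumptions, not the second-order frequency-domain formulation or HPPC data described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
current = rng.normal(0.0, 1.0, n)                       # excitation current (A)
true_a, true_b = 0.95, 0.02
voltage = np.zeros(n)
for k in range(1, n):                                   # synthetic first-order response + noise
    voltage[k] = true_a * voltage[k - 1] + true_b * current[k - 1] + rng.normal(0, 1e-3)

theta = np.zeros(2)                                     # [a, b] estimates
P = np.eye(2) * 1e3                                     # covariance of the estimate
lam = 0.999                                             # forgetting factor for slow parameter drift
for k in range(1, n):
    phi = np.array([voltage[k - 1], current[k - 1]])    # regressor
    K = P @ phi / (lam + phi @ P @ phi)                 # gain
    theta = theta + K * (voltage[k] - phi @ theta)      # update with the prediction error
    P = (P - np.outer(K, phi @ P)) / lam

print("estimated a, b:", np.round(theta, 4), " true:", (true_a, true_b))
```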
Procedia PDF Downloads 150
317 Fuzzy Logic Classification Approach for Exponential Data Set in Health Care System for Prediction of Future Data
Authors: Manish Pandey, Gurinderjit Kaur, Meenu Talwar, Sachin Chauhan, Jagbir Gill
Abstract:
Health-care management systems are of great interest because they provide simple and fast management of all aspects relating to a patient, not only medical ones. Moreover, there are more and more cases of pathologies in which diagnosis and treatment can only be carried out by using medical imaging techniques. With an ever-increasing prevalence, medical images are directly acquired in, or converted into, digital form, for their storage as well as subsequent retrieval and processing. Data mining is the process of extracting information from large data sets through algorithms and techniques drawn from the fields of statistics, machine learning and database management systems. Forecasting is a prediction of what will occur in the future, and it is an uncertain process. Owing to this uncertainty, the accuracy of a forecast is as important as the outcome predicted from the independent variables. A forecast control should be used to establish whether the accuracy of the forecast is within satisfactory limits. Fuzzy regression methods have commonly been used to develop consumer preference models that correlate engineering characteristics with consumer preferences regarding a new product; the consumer preference models provide a platform where product developers can decide on the engineering characteristics in order to satisfy consumer preferences before developing the product. Recent research shows that these fuzzy regression methods are commonly used to model customer preferences. We propose testing the strength of an exponential regression model against a linear regression model. Keywords: health-care management systems, fuzzy regression, data mining, forecasting, fuzzy membership function
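As a small illustration of the exponential-versus-linear comparison proposed above, the sketch below fits both models to a synthetic monotone series with SciPy and compares residual errors; the data are placeholders, not health-care records, and the fuzzy-membership weighting is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

t = np.arange(0, 20)
y = 5.0 * np.exp(0.12 * t) + np.random.normal(0, 1.0, t.size)   # synthetic exponential trend

# Exponential model y = a * exp(b t)
(a, b), _ = curve_fit(lambda x, a, b: a * np.exp(b * x), t, y, p0=(1.0, 0.1))
exp_rmse = np.sqrt(np.mean((y - a * np.exp(b * t)) ** 2))

# Ordinary linear regression for comparison
lin = linregress(t, y)
lin_rmse = np.sqrt(np.mean((y - (lin.intercept + lin.slope * t)) ** 2))

print(f"exponential RMSE {exp_rmse:.2f} vs linear RMSE {lin_rmse:.2f}")
```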
Procedia PDF Downloads 280
316 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning
Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic
Abstract:
Following the regulation of the EU Directive on Energy Efficiency, specifically Article 9, individual metering in district heating systems has to be introduced by the end of 2016. These directions have been implemented in the member states' legal frameworks; Croatia is one of these states. The directive allows installation of both heat metering devices and heat cost allocators. Mainly due to bad communication and PR, a false image was created among the general public that heat cost allocators are devices that save energy. As this notion is wrong, the aim of this work is to develop a model that would precisely express the influence of the installation of heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years, machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data are to be processed with an aim to recognize patterns and correlations among the relevant parameters, as well as in cases where the problem is too complex for human intelligence to solve. One special machine learning method, the decision tree method, has proven an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms will be used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis will be given to regression analysis, logistic regression, support vector machines, decision trees and the random forest method. Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, decision trees and random forest method
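A compact sketch of the tree-based regression step described above, with invented apartment-level features standing in for the real billing data; the feature set, the assumed 12% saving where allocators exist, and the noise level are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(30, 120, n),       # apartment area (m2)
    rng.integers(0, 2, n),         # heat cost allocators installed (0/1)
    rng.uniform(-5, 12, n),        # mean outdoor temperature (deg C)
])
# Synthetic consumption: area- and weather-driven, reduced ~12% where allocators exist.
y = 80 * X[:, 0] * (1 - 0.12 * X[:, 1]) * (1 - 0.02 * X[:, 2]) + rng.normal(0, 200, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R2:", round(r2_score(y_te, model.predict(X_te)), 3))
print("feature importances:", np.round(model.feature_importances_, 3))
```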
Procedia PDF Downloads 251
315 Online Dietary Management System
Authors: Kyle Yatich Terik, Collins Oduor
Abstract:
The current healthcare system has made healthcare more accessible and efficient through the use of information technology, including computer algorithms that generate menus based on the diagnosis. While many systems like these have been created over the years, their main objective is to help healthy individuals calculate their calorie intake and assist them by providing food selections based on a pre-specified calorie target. Such applications have proven useful in some ways, but they are not suitable for monitoring, planning, and managing hospital patients, especially those in critical condition with specific dietary needs. The system addresses a number of objectives; the main objective is to design, develop and implement an efficient, user-friendly and interactive dietary management system. The specific design and development objectives include a monitoring feature for users based on graphs, system-generated reports for users, dietitians, and system admins, a feature that allows users to measure their BMI (Body Mass Index), and a food template feature that guides the user towards a balanced diet plan. In order to develop the system, further research was carried out in Nairobi County, Kenya, with online questionnaires as the preferred research design approach. From the 44 respondents, the major challenges of the manual dietary system were identified, which include a lack of easily accessible calorie information for food products and the expense of physically visiting a dietitian to create a tailored diet plan. In conclusion, the system has the potential to improve quality of life by providing a standard for healthy living and making knowledge readily available through food templates that guide users and allow them to create their own balanced diet plans. Keywords: DMS, dietitian, patient, administrator
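Since the abstract lists a BMI feature, a tiny sketch of that calculation (the standard formula, weight in kilograms divided by height in metres squared) is given below; the category cut-offs follow the common WHO thresholds and are only an illustration of how the system might report them.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index = weight (kg) / height (m) squared."""
    return weight_kg / (height_m ** 2)

def bmi_category(value: float) -> str:
    # Common WHO cut-offs; a clinical system would refine these per population.
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

v = bmi(70, 1.75)
print(round(v, 1), bmi_category(v))   # 22.9 normal
```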
Procedia PDF Downloads 161
314 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations
Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang
Abstract:
Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows for the control of the information’s dissemination impact in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which limits the stacking of sufficient model depth to excavate global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNNs-coupled Ordinary Differential Equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.Keywords: source identification, ordinary differential equations, label propagation, complex networks
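A bare-bones sketch of the label-propagation ingredient the ODESI model starts from: an observed infection snapshot is mixed with neighbour averages over the adjacency structure for a few steps to obtain a density per node. The toy graph, the mixing weight, and the number of iterations are illustrative; the graph-ODE half of the model is not reproduced.

```python
import numpy as np

# Toy undirected graph as an adjacency matrix (6 nodes).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

observed_infected = np.array([0, 1, 1, 0, 0, 0], dtype=float)   # snapshot of infected nodes

# Normalised propagation: each step mixes a node's state with its neighbours' average.
D_inv = np.diag(1.0 / A.sum(axis=1))
state = observed_infected.copy()
for _ in range(10):
    state = 0.5 * observed_infected + 0.5 * (D_inv @ A @ state)

print(np.round(state, 3))   # higher values = denser infected neighbourhood, hinting at the source region
```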
Procedia PDF Downloads 22
313 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning
Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan
Abstract:
The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of different machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then subsequently applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance with predictions primarily being within the experimental uncertainty of the test data. Furthermore, machine learning can predict glass dissolution model parameters behavior, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass
Procedia PDF Downloads 117
312 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration
Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef
Abstract:
Automated enumeration of the number of hand fingers is widely used in several motion gaming and distance control applications, and is discussed in several published papers as a starting block for hand recognition systems. The automated finger enumeration technique should not only be accurate, but also must have a fast response for a moving-picture input. The high performance of video in motion games or distance control will inhibit the program’s overall speed, for image processing software such as Matlab need to produce results at high computation speeds. Since an automated finger enumeration with minimum error and processing time is desired, a comparative study between two finger enumeration techniques is presented and analyzed in this paper. In the pre-processing stage, various image processing functions were applied on a real-time video input to obtain the final cleaned auto-cropped image of the hand to be used for the two techniques. The first technique uses the known morphological tool of skeletonization to count the number of skeleton’s endpoints for fingers. The second technique uses a radial distance method to enumerate the number of fingers in order to obtain a one dimensional hand representation. For both discussed methods, the different steps of the algorithms are explained. Then, a comparative study analyzes the accuracy and speed of both techniques. Through experimental testing in different background conditions, it was observed that the radial distance method was more accurate and responsive to a real-time video input compared to the skeletonization method. All test results were generated in Matlab and were based on displaying a human hand for three different orientations on top of a plain color background. Finally, the limitations surrounding the enumeration techniques are presented.Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab
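A short sketch of the skeletonization-based counting described above, using scikit-image: skeletonize a binary hand mask and count skeleton pixels with exactly one 8-connected skeleton neighbour (endpoints). The synthetic three-pronged mask stands in for a segmented hand; the pre-processing, cropping, and the radial-distance alternative are not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

# Synthetic binary mask with three "fingers" rising from a "palm".
mask = np.zeros((60, 60), dtype=bool)
mask[40:55, 10:50] = True                     # palm
for col in (15, 28, 41):                      # three finger strips
    mask[10:40, col:col + 4] = True

skeleton = skeletonize(mask)

# Endpoints = skeleton pixels with exactly one 8-connected skeleton neighbour.
neighbours = convolve(skeleton.astype(int), np.ones((3, 3), dtype=int), mode="constant")
endpoints = skeleton & (neighbours == 2)      # the pixel itself plus one neighbour
print("skeleton endpoints found:", int(endpoints.sum()))
```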
Procedia PDF Downloads 382
311 Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm
Authors: El Harraj Abdeslam, Raissouni Naoufal
Abstract:
The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection/tracking is background subtraction. Many approaches have been suggested for background subtraction, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and, mainly, focus on the ability to detect moving objects in dynamic scenes, for possible applications in the monitoring of complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing invariance to illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the contrast amplification of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes. Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes
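A short OpenCV sketch of the pipeline outlined above: CLAHE pre-processing, a Gaussian-mixture background model, and morphological cleanup. The video path is a placeholder, and OpenCV's MOG2 subtractor stands in for the per-channel K=5 GMM the authors describe, so this is a sketch of the general approach rather than their implementation.

```python
import cv2

cap = cv2.VideoCapture("surveillance.mp4")        # placeholder path
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = clahe.apply(gray)                      # mitigate illumination changes
    mask = bg.apply(gray)                         # GMM foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # erosion + dilation removes speckle noise
    mask = cv2.dilate(mask, kernel, iterations=1)
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```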
Procedia PDF Downloads 258
310 Infrared Spectroscopy in Tandem with Machine Learning for Simultaneous Rapid Identification of Bacteria Isolated Directly from Patients' Urine Samples and Determination of Their Susceptibility to Antibiotics
Authors: Mahmoud Huleihel, George Abu-Aqil, Manal Suleiman, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman
Abstract:
Urinary tract infections (UTIs) are considered to be the most common bacterial infections worldwide; they are caused mainly by Escherichia (E.) coli (about 80%), Klebsiella pneumoniae (about 10%) and Pseudomonas aeruginosa (about 6%). Although antibiotics are considered the most effective treatment for bacterial infectious diseases, unfortunately, most bacteria have already developed resistance to the majority of commonly available antibiotics. Therefore, it is crucial to identify the infecting bacterium and to determine its susceptibility to antibiotics in order to prescribe effective treatment. Classical methods are time consuming, requiring ~48 hours to determine bacterial susceptibility. Thus, it is highly urgent to develop a new method that can significantly reduce the time required both to determine the infecting bacterium at the species level and to diagnose its susceptibility to antibiotics. Fourier-Transform Infrared (FTIR) spectroscopy is well known as a sensitive and rapid method, which can detect minor molecular changes in the bacterial genome associated with the development of antibiotic resistance. The main goal of this study is to examine the potential of FTIR spectroscopy, in tandem with machine learning algorithms, to identify the infecting bacteria at the species level and to determine E. coli susceptibility to different antibiotics directly from patients' urine in about 30 minutes. For this goal, 1600 different E. coli isolates were obtained from different patients' urine samples, measured by FTIR, and analyzed using different machine learning algorithms like Random Forest, XGBoost, and CNN. We achieved 98% success in isolate-level identification and 89% accuracy in susceptibility determination. Keywords: urinary tract infections (UTIs), E. coli, Klebsiella pneumonia, Pseudomonas aeruginosa, bacterial susceptibility to antibiotics, infrared microscopy, machine learning
Procedia PDF Downloads 170
309 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm
Authors: Lydia Novozhilova, Vladimir Urazhdin
Abstract:
An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. The conventional approaches to optimization generally start with pre-defined geometric parameters of the vessels, and then employ analytic or numeric calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain will be identified in the multidimensional parameter space. Every point in the feasibility domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined in this domain is formulated and optimized. The broader applicability of the suggested methodology is maximized by implementing the Support Vector Machines (SVM) classification algorithm of machine learning for identification of the feasible design region. Training data for SVM classifier is obtained using the Simulation package of SOLIDWORKS®. Based on the data, the SVM algorithm produces a curvilinear boundary separating admissible and not admissible sets of design parameters with maximal margins. Then optimization of the vessel parameters in the feasibility domain is performed using the standard algorithms for the constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. As a functional constraint, von Mises stress criterion is used but any other stability constraint admitting mathematical formulation can be incorporated into the proposed approach. Suggested methodology has a good potential for reducing design time for finding optimal parameters of thin-walled vessels under uniform external pressure.Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier
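A schematic sketch of the two-step idea described above: train an SVM on simulated pass/fail labels to approximate the feasibility boundary, then optimise an objective subject to the classifier's decision function being non-negative. The two design parameters, the synthetic "simulation" rule standing in for the von Mises check, and the mass-like objective are invented placeholders for the SOLIDWORKS-derived data.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Step 1: synthetic "simulation" results over (shell thickness mm, ring spacing mm).
X = rng.uniform([2.0, 50.0], [12.0, 400.0], size=(400, 2))
admissible = (X[:, 0] >= 4.0 + X[:, 1] / 100.0).astype(int)    # placeholder stability check

svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, admissible)

# Step 2: minimise a mass-like objective inside the learned feasibility domain.
def mass(p):
    return 3.0 * p[0] - 0.002 * p[1]                           # thinner shell, wider spacing = lighter

feasible = {"type": "ineq", "fun": lambda p: svm.decision_function([p])[0]}
res = minimize(mass, x0=[8.0, 100.0], bounds=[(2.0, 12.0), (50.0, 400.0)],
               constraints=[feasible], method="SLSQP")
print("optimal (thickness, spacing):", np.round(res.x, 2))
```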
Procedia PDF Downloads 328
308 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification
Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran
Abstract:
The brain is an important organ in our body since it is responsible for most functions, such as vision, memory, etc. However, different diseases such as Alzheimer's disease and tumors can affect the brain and lead to partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect a possible problem early and therefore take appropriate treatment, especially in the case of brain tumors. Different imaging modalities are proposed for the diagnosis of brain tumors. The most powerful and widely used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate a possible tumor in the brain and prescribe the appropriate and needed treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze the tumor. In fact, many Computer Aided Diagnostic (CAD) tools, built on dedicated image processing algorithms, have been proposed and are exploited by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification and feature extraction. Our proposed CAD includes three main parts. Firstly, we load the brain MRI. Secondly, a robust technique for brain tumor extraction is proposed. This technique is based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). DWT is characterized by its multiresolution analysis property, which is why it was applied to MRI images with different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback since it requires huge storage and is computationally expensive. To decrease the dimension of the feature vector and the computing time, the PCA technique is considered. In the last stage, according to the different extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification, including all above-mentioned stages, is designed and developed using the MATLAB GUIDE user interface. Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM
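A condensed Python sketch of the DWT → PCA → SVM chain described above, using PyWavelets and scikit-learn on random placeholder "MRI slices"; the wavelet choice, decomposition level, component count, and labels are illustrative assumptions rather than the authors' MATLAB implementation.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random((120, 64, 64))                 # placeholder MRI slices
labels = rng.integers(0, 2, 120)                   # 0 = benign, 1 = malignant (synthetic)

def dwt_features(img, level=2):
    # Keep the low-frequency approximation coefficients as the feature vector.
    approx = pywt.wavedec2(img, "haar", level=level)[0]
    return approx.ravel()

X = np.array([dwt_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

pca = PCA(n_components=20).fit(X_tr)               # shrink the DWT feature vector
clf = SVC(kernel="rbf").fit(pca.transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(pca.transform(X_te))))
```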
Procedia PDF Downloads 251
307 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models
Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti
Abstract:
In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for both cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an impressive accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics
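To illustrate the regression-style score prediction and the ±10-run accuracy measure reported above, a minimal scikit-learn sketch on synthetic innings features is shown; the features and data are invented, not the IPL dataset or the paper's feature engineering.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 800
runs_at_10 = rng.integers(50, 110, n)            # runs after 10 overs
wickets_at_10 = rng.integers(0, 6, n)
run_rate = runs_at_10 / 10.0
final_score = (runs_at_10 + 10.0 * run_rate * (1 - 0.08 * wickets_at_10)
               + rng.normal(0, 12, n))           # synthetic ground truth

X = np.column_stack([runs_at_10, wickets_at_10, run_rate])
X_tr, X_te, y_tr, y_te = train_test_split(X, final_score, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
within_10 = np.mean(np.abs(pred - y_te) <= 10)   # fraction of predictions within +/- 10 runs
print(f"within +/-10 runs: {within_10:.1%}")
```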
Procedia PDF Downloads 55
306 Radar Track-based Classification of Birds and UAVs
Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo
Abstract:
In recent years, the number of Unmanned Aerial Vehicles (UAVs) has significantly increased. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification and neutralization of such flying objects become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by the presence of a high number of tracks related to flying birds make especially challenging the drone detection task: operator PPI is cluttered with a huge number of potential threats and his reaction time can be severely affected. Flying birds compared to UAVs show similar velocity, RADAR cross-section and, in general, similar characteristics. Building from the absence of a single feature that is able to distinguish UAVs and birds, this paper uses a multiple features approach where an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. RADAR tracks acquired on the field and related to different UAVs and birds performing various trajectories were used to extract specifically designed target movement-related features based on velocity, trajectory and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (Neural network, SVM, Logistic regression…) both in terms of the number of selected features and misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%).Keywords: birds, classification, machine learning, UAVs
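A compact sketch of genetic-algorithm feature selection of the kind described above: bit-string individuals encode which features feed a classifier, and cross-validated accuracy is the fitness. Synthetic data replace the radar-track features, and the population size, mutation rate, and classifier are illustrative settings, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, n_informative=5, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=500)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, (20, X.shape[1]))                 # 20 random bit-string individuals
for _ in range(15):                                        # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                # keep the better half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])         # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05               # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "cv accuracy:", round(fitness(best), 3))
```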
Procedia PDF Downloads 224
305 Level Set Based Extraction and Update of Lake Contours Using Multi-Temporal Satellite Images
Authors: Yindi Zhao, Yun Zhang, Silu Xia, Lixin Wu
Abstract:
The contours and areas of water surfaces, especially lakes, often change due to natural disasters and construction activities. Extracting and updating water contours from satellite images with image processing algorithms is an effective way to track these changes. However, producing water surface contours that are close to the true boundaries is still a challenging task. This paper compares the performance of three different level set models, namely the Chan-Vese (CV) model, the signed pressure force (SPF) model, and the region-scalable fitting (RSF) energy model, for extracting lake contours. Experimental testing indicates that the RSF model, in which a region-scalable fitting energy functional is defined and incorporated into a variational level set formulation, is superior to CV and SPF, and it produces desirable contour lines even when there are “holes” in the water regions, such as islands within a lake. Therefore, the RSF model is applied to extracting lake contours from Landsat satellite images. Four temporal Landsat satellite images, from the years 2000, 2005, 2010, and 2014, are used in our study. All of them were acquired in May, with the same path/row (121/036), covering Xuzhou City, Jiangsu Province, China. Firstly, the near-infrared (NIR) band is selected for water extraction. Image registration is conducted on the NIR bands of the different temporal images for information update, and linear stretching is applied in order to distinguish water from other land cover types. Then, for the first temporal image, acquired in 2000, lake contours are extracted via the RSF model initialized with user-defined rectangles. Afterwards, using the lake contours extracted from the previous temporal image as initial values, lake contours are updated for the current temporal image by means of the RSF model. Meanwhile, changed and unchanged lakes are also detected. The results show that great changes have taken place in two lakes, i.e., Dalong Lake and Panan Lake, and that the RSF model can effectively extract and update lake contours using multi-temporal satellite images.
Keywords: level set model, multi-temporal image, lake contour extraction, contour update
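A brief sketch of level-set water extraction on an NIR band. The paper's preferred model is the region-scalable fitting (RSF) energy; as a readily available stand-in, this example uses the morphological Chan-Vese model (one of the three models compared) from scikit-image. The file name, iteration count, and smoothing setting are assumptions.

```python
import numpy as np
from skimage import io, exposure
from skimage.segmentation import morphological_chan_vese

# Assumed NIR band of a co-registered Landsat scene (file name is hypothetical).
nir = io.imread("landsat_2000_nir.tif").astype(float)

# Linear stretch to [0, 1] to better separate water from other land cover.
nir = exposure.rescale_intensity(nir, out_range=(0.0, 1.0))

# Evolve the level set; the binary result approximates water regions
# (water appears dark in the NIR band).
mask = morphological_chan_vese(nir, 200, init_level_set="checkerboard", smoothing=2)

# Lake contours can then be traced from the mask (e.g. with
# skimage.measure.find_contours) and reused to initialize the segmentation
# of the next temporal image, mirroring the update scheme described above.
print("Water pixels:", int(mask.sum()))
```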
Procedia PDF Downloads 366
304 Comparison of Virtual Non-Contrast to True Non-Contrast Images Using Dual Layer Spectral Computed Tomography
Authors: O’Day Luke
Abstract:
Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient, in both the true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e., gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and more than 90% of all comparisons showed a difference of less than 15 HU. For all tissues except spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phase showed a good correlation in most tissue types. Aortic attenuation, however, was somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets remains a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.
Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison
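An illustrative sketch of the ROI-based attenuation comparison described above. The table layout (one row per ROI with paired TNC and VNC values) and file name are assumed for demonstration, not the authors' data format.

```python
import pandas as pd

# Assumed layout: one row per ROI with paired attenuation measurements.
rois = pd.read_csv("roi_measurements.csv")  # columns: patient, tissue, hu_tnc, hu_vnc

diff = rois["hu_vnc"] - rois["hu_tnc"]
soft_tissue = rois["tissue"] != "spongious_bone"

print(f"Mean difference excluding bone: "
      f"{diff[soft_tissue].mean():.1f} +/- {diff[soft_tissue].std():.1f} HU")
print(f"Comparisons within 15 HU: {(diff.abs() < 15).mean():.1%}")

# Per-tissue mean absolute difference, mirroring the per-tissue analysis.
print(rois.assign(abs_diff=diff.abs()).groupby("tissue")["abs_diff"].mean().round(1))
```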
Procedia PDF Downloads 142
303 Integration of Educational Data Mining Models to a Web-Based Support System for Predicting High School Student Performance
Authors: Sokkhey Phauk, Takeo Okazaki
Abstract:
The challenging task in educational institutions is to maximize student performance and minimize the failure rate of poor-performing students. An effective way to approach this task is to understand student learning patterns and their highly influential factors, and to obtain an early prediction of student learning outcomes so that policies for improvement can be set up in a timely manner. Educational data mining (EDM) is an emerging discipline at the intersection of data mining, statistics, and machine learning, concerned with extracting useful knowledge and information for improvement and development in the education environment. The aim of this work is to propose EDM techniques and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models is conducted, and the best-performing models are subsequently refined to achieve higher performance. The hybrid random forest (Hybrid RF) produces the most successful classification. In the context of intervention and improving learning outcomes, a feature selection method called MICHI, which combines the mutual information (MI) and chi-square (CHI) algorithms based on ranked feature scores, is introduced to select a dominant feature set that improves prediction performance and serves as information for intervention. Using the proposed EDM techniques, an academic performance prediction system (APPS) is developed for educational stakeholders to obtain an early prediction of student learning outcomes for timely intervention. Experimental outcomes and evaluation surveys report the effectiveness and usefulness of the developed system. The system is used to help educational stakeholders and related individuals intervene and improve student performance.
Keywords: academic performance prediction system, educational data mining, dominant factors, feature selection method, prediction model, student performance
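A sketch of a MICHI-style feature selection step followed by a random forest, under stated assumptions: features are ranked by mutual information and by chi-square, the two normalized rankings are summed, and the top-ranked "dominant" features feed the classifier. The combination rule and synthetic data are illustrative, and a plain RandomForestClassifier stands in for the paper's hybrid random forest.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for student records with labelled performance classes.
X, y = make_classification(n_samples=400, n_features=25, n_informative=8, random_state=1)
X = MinMaxScaler().fit_transform(X)  # chi-square requires non-negative features

mi_scores = mutual_info_classif(X, y, random_state=1)
chi_scores, _ = chi2(X, y)

def normalize(scores):
    """Min-max normalize a score vector so the two rankings are comparable."""
    return (scores - scores.min()) / (scores.max() - scores.min())

combined = normalize(mi_scores) + normalize(chi_scores)
dominant = np.argsort(combined)[::-1][:10]  # keep the 10 top-ranked features

rf = RandomForestClassifier(n_estimators=300, random_state=1)
accuracy = cross_val_score(rf, X[:, dominant], y, cv=5).mean()
print("Dominant feature indices:", sorted(dominant.tolist()))
print(f"Cross-validated accuracy on the dominant set: {accuracy:.2%}")
```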
Procedia PDF Downloads 107
302 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms
Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano
Abstract:
In this article, we deal with a variant of the classical course timetabling problem that has practical applications in many areas of education. In particular, in this paper we are interested in remedial courses in high schools. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies; a student might be under-prepared in an entire course, or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at their disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time and resource constraints. Moreover, some prerequisites between the teaching units must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model. Thus, solving it with a general-purpose solver is feasible only for small instances, while solving real-life-sized instances requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach, in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality within the time limit, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases achieves an improvement on the objective function value obtained by the MIP model; this improvement ranges between 18% and 66%.
Keywords: heuristic, MIP model, remedial course, school, timetabling
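A toy sketch, using the PuLP library, of the training-offer selection side of such a MIP: choose which teaching units to activate under a resource budget, respecting a prerequisite link, so as to maximize a quality score. The unit names, costs, and weights are invented for illustration; the authors' actual model also covers the full timetabling constraints.

```python
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

# Invented data: remedial teaching units with costs (teacher-hours) and
# quality scores; one prerequisite pair.
units = ["algebra_1", "algebra_2", "grammar", "essay_writing"]
cost = {"algebra_1": 4, "algebra_2": 3, "grammar": 2, "essay_writing": 5}
quality = {"algebra_1": 8, "algebra_2": 6, "grammar": 5, "essay_writing": 7}
budget = 9
prerequisites = [("algebra_2", "algebra_1")]  # (unit, unit it requires)

x = LpVariable.dicts("activate", units, cat=LpBinary)

model = LpProblem("training_offer_selection", LpMaximize)
model += lpSum(quality[u] * x[u] for u in units)           # maximize teaching quality
model += lpSum(cost[u] * x[u] for u in units) <= budget    # limited funds/teacher time
for unit, required in prerequisites:
    model += x[unit] <= x[required]                         # prerequisite must be active too

model.solve()
print("Activated units:", [u for u in units if x[u].value() == 1])
```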
Procedia PDF Downloads 605