Search results for: cluster model approach
26481 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data
Authors: Jian-Heng Wu, Bor-Shen Lin
Abstract:
The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide area of the ocean. In this paper, the Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model represents the distribution of a random vector as a weighted sum of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a Gaussian mixture model for each location individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of the pairwise Bhattacharyya distances between their Gaussian components. Consequently, the distance between two water masses may be computed quickly, which allows the automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior assumptions about a regression family, but also restricts the model complexity by controlling the number of mixture components when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage, and share temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system for ocean data, which facilitates the comparison of water masses by aggregating the data without degrading the discriminating capability. This system provides an interface for interactively querying geographic locations with similar temperature-salinity-depth characteristics and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
Keywords: water mass, Gaussian mixture model, data visualization, system framework
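As a rough illustration of the distance described in this abstract, the sketch below fits one scikit-learn GaussianMixture per location on synthetic temperature-salinity-depth samples and takes a weighted sum of pairwise Bhattacharyya distances between the fitted components; the data, component count, and the particular weighting (product of component weights) are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def gmm_distance(gmm_a, gmm_b):
    """Weighted sum of pairwise Bhattacharyya distances between mixture components."""
    d = 0.0
    for wa, ma, ca in zip(gmm_a.weights_, gmm_a.means_, gmm_a.covariances_):
        for wb, mb, cb in zip(gmm_b.weights_, gmm_b.means_, gmm_b.covariances_):
            d += wa * wb * bhattacharyya(ma, ca, mb, cb)
    return d

# Fit one GMM per location on (temperature, salinity, depth) samples.
rng = np.random.default_rng(0)
tsd_site1 = rng.normal(size=(500, 3))            # placeholder T-S-D data, site 1
tsd_site2 = rng.normal(loc=0.5, size=(500, 3))   # placeholder T-S-D data, site 2
gmm1 = GaussianMixture(n_components=3, covariance_type="full").fit(tsd_site1)
gmm2 = GaussianMixture(n_components=3, covariance_type="full").fit(tsd_site2)
print(gmm_distance(gmm1, gmm2))
```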
Procedia PDF Downloads 144
26480 Metabolic Predictive Model for PMV Control Based on Deep Learning
Authors: Eunji Choi, Borang Park, Youngjae Choi, Jinwoo Moon
Abstract:
In this study, a predictive model for estimating the metabolic rate (MET) of the human body was developed for the optimal control of the indoor thermal environment. Human body images for indoor activities and body joint coordinate values were collected as data sets and used in the predictive model. A deep learning algorithm was used in an initial model, and its numbers of hidden layers and hidden neurons were optimized. Lastly, the prediction performance was analyzed after the model was trained on the collected data. In conclusion, the feasibility of MET prediction was confirmed, and developing more varied data sets and refining the predictive model were proposed as directions for future study.
Keywords: deep learning, indoor quality, metabolism, predictive model
Procedia PDF Downloads 257
26479 Towards a Rigorous Analysis for a Supercritical Particulate Process
Authors: Yousef Bakhbakhi
Abstract:
Crystallization with supercritical fluids (SCFs), a mature technology for producing particles of micron and sub-micron size with narrow size distribution, has gained appreciable importance as an environmentally friendly technology. Particle synthesis using SCFs can be achieved through a number of special processes involving solvent and antisolvent mechanisms. In this study, the compressed antisolvent (PCA) process is used as a model to analyze the theoretical complexity of crystallization with supercritical fluids. The population balance approach has proven to be an effective technique for simulating and predicting particle size and size distribution. The nucleation and growth mechanisms of particle formation in the PCA process are investigated using the population balance equation, which describes the evolution of the particle population through coalescence and breakup with time. The mathematical population balance model consists of a set of partial differential equations with algebraic constraints, which demands a rigorous numerical approach. A combined collocation and Galerkin finite element method is proposed as a high-resolution technique for solving the dynamics of the PCA process.
Keywords: particle formation, particle size and size distribution, PCA, supercritical carbon dioxide
Procedia PDF Downloads 197
26478 Investigating the Experiences of Higher Education Academics on the Blended Approach Used during the Induction Course
Authors: Ann-May Marais
Abstract:
South African higher education institutions are following the global adoption of a blended approach to teaching and learning. Blended learning is viewed as a transformative teaching-learning approach, as it provides students with the optimum experience by mixing the best of face-to-face and online learning. Although academics realise the benefits of blended learning, they find it challenging and time-consuming to implement blended strategies. Professional development is a critical component of the adoption of higher education teaching-learning approaches. The Institutional course for higher education academics offered at a South African University was designed in a blended model, implemented and evaluated. This paper reports on a study that investigated the experiences of academics on the blended approach used during the induction course. A qualitative design-based research methodology was employed, and data was collected using participant feedback and document analysis. The data gathered from each of the four ICNL offerings were used to inform the design of the next course. Findings indicated that lecturers realised that blended learning could cater to student diversity, different learning styles, engagement, and innovation. Furthermore, it emerged that the course has to cater for diversity in technology proficiency and readiness of participants. Participants also require ongoing support in technology usage and discipline-specific blended learning workshops. This paper contends that the modelling of a blended approach to professional development can be an effective way to motivate academics to apply blended learning in their teaching-learning experiences.Keywords: blended learning, professional development, induction course, integration of technology
Procedia PDF Downloads 162
26477 The Fragility of Sense: The Twofold Temporality of Embodiment and Its Role for Depression
Authors: Laura Bickel
Abstract:
This paper aims to investigate to what extent Merleau-Ponty’s philosophy of body memory serves as a viable resource for the enactive approach to cognitive science and its first-person experience-based research on ‘recurrent depressive disorder’ coded F33 in ICD-10. In pursuit of this goal, the analysis begins by revisiting the neuroreductive paradigm. This paradigm serves biological psychiatry to explain the condition of vital contact in terms of underlying neurophysiological mechanisms. It is demonstrated that the neuroreductive model cannot sufficiently account for the depressed person’s episodical withdrawal in causal terms. The analysis of the irregular loss of vital resonance requires integrating the body as the subject of experience and its phenomenological time. Then, it is shown that the enactive approach to depression as disordered sense-making is a promising alternative. The enactive model of perception implies that living beings do not register pre-existing meaning ‘out there’ but unfold ‘sense’ in their action-oriented response to the world. For the enactive approach, Husserl’s passive synthesis of inner time consciousness is fundamental for what becomes perceptually present for action. It seems intuitive to bring together the enactive approach to depression with the long-standing view in phenomenological psychopathology that explains the loss of vital contact by appealing to the disruption of the temporal structure of consciousness. However, this paper argues that the disruption of the temporal structure is not justified conceptually. Instead, one may integrate Merleau-Ponty’s concept of the past as the unconscious into the enactive approach to depression. From this perspective, the living being’s experiential and biological past inserts itself in the form of habit and bodily skills and ensures action-oriented responses to the environment. Finally, it is concluded that the depressed person’s withdrawal indicates the impairment of this application process. The person suffering from F33 cannot actualize sedimented meaning to respond to the valences and tasks of a given situation.Keywords: depression, enactivism, neuroreductionsim, phenomenology, temporality
Procedia PDF Downloads 132
26476 Model Averaging in a Multiplicative Heteroscedastic Model
Authors: Alan Wan
Abstract:
In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk
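As a toy illustration of frequentist model averaging in a heteroscedastic setting, the sketch below combines two candidate mean models with simplex-constrained weights chosen to minimise squared prediction error on a held-out split; this held-out criterion is a crude stand-in for the plug-in risk estimator used in the paper, and the synthetic data and candidate models are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
x = rng.uniform(0, 3, 200)
# Heteroscedastic noise: the error scale grows with x.
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(scale=0.3 * (1 + x), size=x.size)

x_tr, y_tr, x_va, y_va = x[:150], y[:150], x[150:], y[150:]
pred_va = np.column_stack([
    np.polyval(np.polyfit(x_tr, y_tr, 1), x_va),    # candidate 1: linear mean
    np.polyval(np.polyfit(x_tr, y_tr, 2), x_va),    # candidate 2: quadratic mean
])

risk = lambda w: np.mean((y_va - pred_va @ w) ** 2)  # held-out squared prediction risk
res = minimize(risk, x0=[0.5, 0.5], method="SLSQP", bounds=[(0, 1)] * 2,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
print(res.x)   # averaging weights over the two candidate mean models
```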
Procedia PDF Downloads 384
26475 Sensitivity Analysis during the Optimization Process Using Genetic Algorithms
Authors: M. A. Rubio, A. Urquia
Abstract:
Genetic algorithms (GA) are applied to the solution of high-dimensional optimization problems. Additionally, sensitivity analysis (SA) is usually carried out to determine the effect of changes in the parameter values of the objective function on the optimal solutions. These two analyses (i.e., optimization and sensitivity analysis) are computationally intensive when applied to high-dimensional functions. The approach presented in this paper consists of performing the SA during the GA execution, by statistically analyzing the data obtained from running the GA. The advantage is that the SA does not involve additional evaluations of the objective function and, consequently, the proposed approach requires less computational effort than conducting optimization and SA in two consecutive steps.
Keywords: optimization, sensitivity, genetic algorithms, model calibration
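A minimal sketch of the idea, under strong simplifying assumptions: a toy GA logs every (candidate, fitness) evaluation it already makes, and rank correlations over that log serve as a cheap sensitivity indicator, with no extra objective-function calls. The GA, the objective, and the Spearman-based measure are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.stats import spearmanr

def objective(x):
    return np.sum((x - 0.3) ** 2) + 5.0 * x[0] ** 2   # parameter 0 dominates

rng = np.random.default_rng(1)
pop = rng.uniform(-1, 1, size=(40, 5))
log_x, log_f = [], []

for _ in range(50):                                    # simple truncation-selection GA
    fitness = np.array([objective(ind) for ind in pop])
    log_x.append(pop.copy())
    log_f.append(fitness)
    parents = pop[np.argsort(fitness)[:20]]            # keep the best half
    children = parents + rng.normal(scale=0.1, size=parents.shape)  # mutation
    pop = np.vstack([parents, children])

X = np.vstack(log_x)
f = np.concatenate(log_f)
sensitivity = []
for j in range(X.shape[1]):                            # reuse logged evaluations only
    rho, _ = spearmanr(X[:, j], f)
    sensitivity.append(abs(rho))
print(sensitivity)   # parameter 0 should typically show the largest effect
```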
Procedia PDF Downloads 436
26474 Modeling Operating Theater Scheduling and Configuration: An Integrated Model in Health-Care Logistics
Authors: Sina Keyhanian, Abbas Ahmadi, Behrooz Karimi
Abstract:
We present a multi-objective binary programming model which simultaneously considers the scheduling of surgical cases among operating rooms and the configuration of surgical instruments in limited-capacity hospital trays. Many mathematical models have been developed previously in the literature addressing different challenges in health-care logistics, such as assigning operating rooms, leveling beds, etc. However, what happens inside the operating rooms, the inventory management of the instruments required for various operations, and their integration with surgical scheduling have been poorly discussed. Our model minimizes the movements between trays during a surgery, which recalls the well-known cell formation problem in group technology. This formulation can also make a major potential contribution to robotic surgeries. The tray configuration problem, which takes as input the surgical instruments requirement plan (SIRP) and the sequence of surgical procedures based on required instruments (SIRO), is nested inside a bin packing problem. This modeling approach helps us understand that solutions with the same objective value will not necessarily be identical when it comes to the rearrangement of surgeries among rooms. A numerical example is solved via a proposed nested simulated annealing (SA) optimization approach, which provides insights into how different configurations within a solution can alter the optimal condition.
Keywords: health-care logistics, hospital tray configuration, off-line bin packing, simulated annealing optimization, surgical case scheduling
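A toy sketch of the annealing idea under strong simplifications (a single tray-assignment decision, no capacity constraints, and no room scheduling): simulated annealing searches for an instrument-to-tray assignment that minimises tray changes along a given instrument usage sequence.

```python
import numpy as np

rng = np.random.default_rng(7)
n_instruments, n_trays = 12, 3
sequence = rng.integers(0, n_instruments, size=60)    # instruments in order of use

def movements(assign):
    """Count tray switches along the usage sequence (the quantity to minimise)."""
    trays = assign[sequence]
    return int(np.sum(trays[1:] != trays[:-1]))

assign = rng.integers(0, n_trays, size=n_instruments)  # random initial assignment
cost, temp = movements(assign), 5.0
for _ in range(5000):
    cand = assign.copy()
    cand[rng.integers(n_instruments)] = rng.integers(n_trays)   # move one instrument
    c = movements(cand)
    if c < cost or rng.random() < np.exp((cost - c) / temp):    # Metropolis acceptance
        assign, cost = cand, c
    temp *= 0.999                                               # geometric cooling
print(cost, assign)
```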
Procedia PDF Downloads 282
26473 A Metaheuristic for the Layout and Scheduling Problem in a Job Shop Environment
Authors: Hernández Eva Selene, Reyna Mary Carmen, Rivera Héctor, Barragán Irving
Abstract:
We propose an approach that jointly addresses the layout of a facility and the scheduling of a sequence of jobs. In real production, these two problems are interrelated; however, they are treated separately in the literature. Our approach is an extension of the job shop problem with transportation delay, where the location of each machine is selected among possible sites. The model minimizes the makespan using the shortest processing time rule with two algorithms: the first considers all permutations of machine locations, and the second uses a heuristic to select only some specific permutations, which reduces computational time. Several instances are solved and compared with the literature.
Keywords: layout problem, job shop scheduling problem, concurrent scheduling and layout problem, metaheuristic
Procedia PDF Downloads 606
26472 Logistics Hub Location and Scheduling Model for Urban Last-Mile Deliveries
Authors: Anastasios Charisis, Evangelos Kaisar, Steven Spana, Lili Du
Abstract:
Logistics play a vital role in the prosperity of today’s cities, but current urban logistics practices are proving problematic, causing negative effects such as traffic congestion and environmental impacts. This paper proposes an alternative urban logistics system, leasing hubs inside cities for designated time intervals, and using handcarts for last-mile deliveries. A mathematical model for selecting the locations of hubs and allocating customers, while also scheduling the optimal times during the day for leasing hubs is developed. The proposed model is compared to current delivery methods requiring door-to-door truck deliveries. It is shown that truck traveled distances decrease by more than 60%. In addition, analysis shows that in certain conditions the approach can be economically competitive and successfully applied to address real problems.Keywords: hub location, last-mile, logistics, optimization
Procedia PDF Downloads 194
26471 Reliability Prediction of Tires Using Linear Mixed-Effects Model
Authors: Myung Hwan Na, Ho-Chun Song, EunHee Hong
Abstract:
The normal linear mixed-effects model is widely used to analyze repeated-measurement data. When heteroscedasticity and non-normality of the population distribution are detected at the same time, the normal linear mixed-effects model can give improper analysis results. To achieve more robust estimation, we use a heavy-tailed linear mixed-effects model, which gives more exact and reliable conclusions than the standard normal linear mixed-effects model.
Keywords: reliability, tires, field data, linear mixed-effects model
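For orientation, a minimal sketch of fitting a standard (normal) linear mixed-effects model to synthetic repeated tire measurements with statsmodels; the variable names and data are assumptions, and the heavy-tailed variant advocated in the abstract would require a different estimator (e.g., t-distributed errors).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_tires, n_obs = 20, 8
tire = np.repeat(np.arange(n_tires), n_obs)
mileage = np.tile(np.linspace(0, 50, n_obs), n_tires)            # thousand km
random_intercept = rng.normal(scale=0.5, size=n_tires)[tire]     # tire-level effect
wear = 1.0 + 0.08 * mileage + random_intercept + rng.normal(scale=0.2, size=tire.size)

data = pd.DataFrame({"tire": tire, "mileage": mileage, "wear": wear})
model = smf.mixedlm("wear ~ mileage", data, groups=data["tire"])  # random intercept per tire
result = model.fit()
print(result.summary())
```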
Procedia PDF Downloads 563
26470 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as lack of adequate upper air observations, mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes, each of which carries somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and for different variables from different models. This is the basic concept behind the multi-model superensemble and comprises of a training and a forecast phase. The training phase learns from the recent past performances of models and is used to determine statistical weights from a least square minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to simple ensemble mean, bias corrected ensemble mean and the best model out of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters reducing the direct model output errors. Although it can be applied successfully to the continuous parameters like temperature, humidity, wind speed, mean sea level pressure etc., in this paper, this approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods, due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising of 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., European Centre for Medium Range Weather Forecasts (Europe), National Center for Environmental Prediction (USA), China Meteorological Administration (China), Canadian Meteorological Centre (Canada) and U.K. Meteorological Office (U.K.) obtained from THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data set available. The novel approaches include the dynamical model selection approach in which the selection of the superior models from the participating member models at each grid and for each forecast step in the training period is carried out. Multi-model superensemble based on the training using similar conditions is also discussed in the present study, which is based on the assumption that training with the similar type of conditions may provide the better forecasts in spite of the sequential training which is being used in the conventional multi-model ensemble (MME) approaches. Further, a variety of methods that incorporate a 'neighborhood' around each grid point which is available in literature to allow for spatial error or uncertainty, have also been experimented with the above mentioned approaches. The comparison of these schemes with respect to the observations verifies that the newly developed approaches provide more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall compared to the conventional multi-model approach and the member models.Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
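A toy sketch of the training/forecast split behind the superensemble: regression weights are learned from past member forecasts against observations by least squares and then applied, frozen, to new member forecasts. The synthetic data and the simple bias term are assumptions; the operational scheme works per grid point and per forecast lead time.

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_models = 200, 5
truth_train = rng.gamma(shape=2.0, scale=5.0, size=n_train)       # "observed" rainfall
members_train = truth_train[:, None] + rng.normal(
    scale=[1, 2, 3, 4, 5], size=(n_train, n_models))              # member forecasts, varied skill

# Training phase: multiple regression of observations on member forecasts (+ bias term).
A = np.hstack([members_train, np.ones((n_train, 1))])
weights, *_ = np.linalg.lstsq(A, truth_train, rcond=None)

# Forecast phase: apply the frozen weights to new member forecasts.
members_new = np.array([[12.0, 9.5, 14.0, 8.0, 11.0]])
superensemble = np.hstack([members_new, np.ones((1, 1))]) @ weights
print(superensemble, members_new.mean())   # compare to the simple ensemble mean
```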
Procedia PDF Downloads 139
26469 A Mathematical Model of Blood Perfusion Dependent Temperature Distribution in Transient Case in Human Dermal Region
Authors: Yogesh Shukla
Abstract:
Many attempts have been made to study the temperature distribution problem in human tissues under normal environmental and physiological conditions at constant arterial blood temperature. However, very few attempts have been made to investigate the temperature distribution in human tissues under varying arterial blood temperature. In view of the above, a finite element model has been developed for the unsteady temperature distribution in the dermal region of the human body. The model has been developed for the one-dimensional unsteady-state case. The variation of parameters such as thermal conductivity, blood mass flow, and metabolic activity with respect to position and time has been incorporated into the model, and appropriate boundary conditions have been framed. A central difference approach has been used for the space variable, and the trapezoidal rule has been employed along the time variable. Numerical results have been obtained to study the relationship between temperature and time.
Keywords: rate of metabolism, blood mass flow rate, thermal conductivity, heat generation, finite element method
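A minimal one-dimensional sketch in the spirit of the abstract: a Pennes-type bioheat equation discretised with central differences in space and the trapezoidal (Crank-Nicolson) rule in time for the diffusion term, with the perfusion and metabolic terms handled explicitly for brevity. All tissue parameters and boundary values are illustrative assumptions.

```python
import numpy as np

L, n, dt, steps = 0.01, 51, 1.0, 600                  # 1 cm of tissue, 10 minutes
dx = L / (n - 1)
k, rho_c = 0.48, 3.8e6                                # conductivity, volumetric heat capacity
wb_cb, S = 3000.0, 1000.0                             # blood perfusion term, metabolic heat
Ta, Tskin = 37.0, 23.0                                # arterial and skin-surface temperature

alpha = k / rho_c
r = alpha * dt / dx**2
T = np.full(n, 37.0)                                  # initial tissue temperature

# Crank-Nicolson matrices for the diffusion part (central differences in space).
A = np.diag((1 + r) * np.ones(n)) + np.diag(-r / 2 * np.ones(n - 1), 1) \
    + np.diag(-r / 2 * np.ones(n - 1), -1)
B = np.diag((1 - r) * np.ones(n)) + np.diag(r / 2 * np.ones(n - 1), 1) \
    + np.diag(r / 2 * np.ones(n - 1), -1)
A[0, :], A[-1, :] = 0, 0
A[0, 0], A[-1, -1] = 1, 1                             # Dirichlet rows

for _ in range(steps):
    rhs = B @ T + dt * (wb_cb * (Ta - T) + S) / rho_c  # explicit perfusion + metabolism
    rhs[0], rhs[-1] = Tskin, Ta                        # skin-surface and body-core boundaries
    T = np.linalg.solve(A, rhs)

print(T[::10])                                         # sampled temperature profile
```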
Procedia PDF Downloads 353
26468 Single Valued Neutrosophic Hesitant Fuzzy Rough Set and Its Application
Authors: K. M. Alsager, N. O. Alshehri
Abstract:
In this paper, we propose the notion of the single-valued neutrosophic hesitant fuzzy rough set, obtained by combining the single-valued neutrosophic hesitant fuzzy set and the rough set. This combination is a powerful tool for dealing with uncertainty, granularity, and incompleteness of knowledge in information systems. We present the definition and some basic properties of the proposed model. Finally, we give a general approach that is applied to a decision-making problem in disease diagnosis and demonstrate the effectiveness of the approach with a numerical example.
Keywords: single valued neutrosophic fuzzy set, single valued neutrosophic fuzzy hesitant set, rough set, single valued neutrosophic hesitant fuzzy rough set
Procedia PDF Downloads 272
26467 Quantitative Analysis of the Trade Potential of the United States with Members of the European Union: A Gravity Model Approach
Authors: Zahid Ahmad, Nauman Ali
Abstract:
This study estimates the trade between the USA and individual members of the European Union using the gravity model of trade. The USA has a complex trade relationship with the European countries, which comprise a large number of consumers, making the USA dependent on the EU for a major share of its total world trade. However, the trade potential of the USA with the individual members of the EU is not known. Panel data techniques, e.g., random effects, fixed effects, and pooled panel, have been applied to secondary quantitative data to analyze the trade between the USA and the EU. The trade potential of the USA with individual EU members has been obtained as the ratio of the actual trade of the USA with each EU member to the trade predicted by the gravity model. The study concludes that the USA has greater trade potential with 16 EU members, with Croatia, Portugal, and the United Kingdom on top. On the other hand, Finland, Ireland, and France are the top countries with which the USA has exhausted its trade potential.
Keywords: analytical technique, economic, gravity, international trade, significant
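A schematic of the gravity-model workflow on synthetic data: estimate a log-linear gravity equation, then define trade potential as the ratio of actual to predicted trade. The variable names, the OLS estimator (rather than the panel estimators used in the study), and the data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "gdp_usa": rng.uniform(1e4, 2e4, n),       # reporter GDP (placeholder units)
    "gdp_partner": rng.uniform(1e2, 4e3, n),   # partner GDP
    "distance": rng.uniform(1e3, 9e3, n),      # bilateral distance
})
df["trade"] = (df.gdp_usa * df.gdp_partner / df.distance) * rng.lognormal(0, 0.3, n)

# Log-linear gravity equation estimated by OLS.
model = smf.ols("np.log(trade) ~ np.log(gdp_usa) + np.log(gdp_partner) + np.log(distance)",
                data=df).fit()
df["predicted"] = np.exp(model.fittedvalues)
df["trade_potential"] = df["trade"] / df["predicted"]   # actual / gravity-predicted trade
print(df["trade_potential"].describe())
```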
Procedia PDF Downloads 305
26466 Online Learning for Modern Business Models: Theoretical Considerations and Algorithms
Authors: Marian Sorin Ionescu, Olivia Negoita, Cosmin Dobrin
Abstract:
This scientific communication reports and discusses learning models adaptable to modern business problems, as well as models specific to digital concepts and paradigms. In the PAC (probably approximately correct) learning approach, the learning process begins by receiving a batch of learning examples; this set of examples is used to acquire a hypothesis, and once the learning phase is complete, the hypothesis is used to predict new operational examples. For complex business models, many candidate models should be introduced and evaluated to estimate the induced results, so that the totality of the results can be used to develop a predictive rule that anticipates the choice of new models. In contrast, in online learning processes there is no separation between the learning (training) phase and the prediction phase. Every time a business model is approached, a test example is considered, from the beginning until a model judged correct from the point of view of the business decision is predicted. After a part of the business model is chosen, the label with the logical value "true" becomes known. Some of the business models are then used as learning (training) examples, which helps to improve the prediction mechanisms for future business models.
Keywords: machine learning, business models, convex analysis, online learning
Procedia PDF Downloads 140
26465 Managing Core Competencies in Innovative Entrepreneurship: Theory and Practice
Authors: Olga Shvetsova
Abstract:
This research paper addresses the different issues of competence management in innovation companies. The theoretical bases of human resources management and practical issues of innovative enterprises' competitiveness are considered. The research focuses on modern problems of innovative enterprise management, in particular the effective management of the personnel of innovative enterprises on the basis of the competence approach. The concept of the core competence approach is discussed, along with the view that the key competences of a company create competitive advantages, support strategy development, and protect the business from negative external factors. The methodology used is background research.
Keywords: competence model, competitiveness, innovation management, implementation
Procedia PDF Downloads 316
26464 Model-Free Distributed Control of Dynamical Systems
Authors: Javad Khazaei, Rick Blum
Abstract:
Distributed control is an efficient and flexible approach for coordination of multi-agent systems. One of the main challenges in designing a distributed controller is identifying the governing dynamics of the dynamical systems. Data-driven system identification is currently undergoing a revolution. With the availability of high-fidelity measurements and historical data, model-free identification of dynamical systems can facilitate the control design without tedious modeling of high-dimensional and/or nonlinear systems. This paper develops a distributed control design using consensus theory for linear and nonlinear dynamical systems using sparse identification of system dynamics. Compared with existing consensus designs that heavily rely on knowing the detailed system dynamics, the proposed model-free design can accurately capture the dynamics of the system with available measurements and input data and provide guaranteed performance in consensus and tracking problems. Heterogeneous damped oscillators are chosen as examples of dynamical system for validation purposes.Keywords: consensus tracking, distributed control, model-free control, sparse identification of dynamical systems
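A compact sketch of the sparse-identification step (in the spirit of SINDy) on a damped oscillator: sequentially thresholded least squares over a small candidate library recovers the governing equations from trajectory data. The library, threshold, and toy system are assumptions; the consensus controller built on top of the identified model is not shown.

```python
import numpy as np

# Simulate a damped oscillator x'' + 0.2 x' + x = 0 as a first-order system (forward Euler).
dt, T = 0.01, 20.0
t = np.arange(0, T, dt)
X = np.zeros((t.size, 2))
X[0] = [1.0, 0.0]
for k in range(t.size - 1):
    x, v = X[k]
    X[k + 1] = [x + dt * v, v + dt * (-x - 0.2 * v)]

dXdt = np.gradient(X, dt, axis=0)                     # numerical derivatives

# Candidate library: [1, x, v, x^2, x*v, v^2]
x, v = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones_like(x), x, v, x**2, x * v, v**2])

def stlsq(Theta, dXdt, threshold=0.05, iters=10):
    """Sequentially thresholded least squares for sparse coefficients."""
    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(iters):
        Xi[np.abs(Xi) < threshold] = 0.0
        for j in range(dXdt.shape[1]):
            big = np.abs(Xi[:, j]) >= threshold
            if big.any():
                Xi[big, j], *_ = np.linalg.lstsq(Theta[:, big], dXdt[:, j], rcond=None)
    return Xi

print(stlsq(Theta, dXdt))   # expect roughly x' = v and v' = -x - 0.2 v
```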
Procedia PDF Downloads 265
26463 Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The educational system faces a significant concern with regards to Dyslexia and Dysgraphia, which are learning disabilities impacting reading and writing abilities. This is particularly challenging for children who speak the Sinhala language due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnoses and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, Resnet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The Resnet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science
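A reduced sketch of the final stage described above: an MLP whose hyperparameters are tuned with grid-search cross-validation. The CNN-derived handwriting features are replaced here by random placeholders, and the parameter grid is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 10))                        # stand-in for CNN outputs + other inputs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)         # stand-in risk labels

param_grid = {
    "hidden_layer_sizes": [(32,), (64, 32)],
    "alpha": [1e-4, 1e-3],
}
search = GridSearchCV(MLPClassifier(max_iter=2000, random_state=0), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```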
Procedia PDF Downloads 64
26462 Towards a Measurement-Based E-Government Portals Maturity Model
Authors: Abdoullah Fath-Allah, Laila Cheikhi, Rafa E. Al-Qutaish, Ali Idri
Abstract:
The e-government emerging concept transforms the way in which the citizens are dealing with their governments. Thus, the citizens can execute the intended services online anytime and anywhere. This results in great benefits for both the governments (reduces the number of officers) and the citizens (more flexibility and time saving). Therefore, building a maturity model to assess the e-government portals becomes desired to help in the improvement process of such portals. This paper aims at proposing an e-government maturity model based on the measurement of the best practices’ presence. The main benefit of such maturity model is to provide a way to rank an e-government portal based on the used best practices, and also giving a set of recommendations to go to the higher stage in the maturity model.Keywords: best practices, e-government portal, maturity model, quality model
Procedia PDF Downloads 338
26461 Self-Supervised Attributed Graph Clustering with Dual Contrastive Loss Constraints
Authors: Lijuan Zhou, Mengqi Wu, Changyong Niu
Abstract:
Attributed graph clustering can utilize the graph topology and node attributes to uncover hidden community structures and patterns in complex networks, aiding in the understanding and analysis of complex systems. Utilizing contrastive learning for attributed graph clustering can effectively exploit meaningful implicit relationships between data. However, existing attributed graph clustering methods based on contrastive learning suffer from the following drawbacks: 1) Complex data augmentation increases computational cost, and inappropriate data augmentation may lead to semantic drift. 2) The selection of positive and negative samples neglects the intrinsic cluster structure learned from graph topology and node attributes. Therefore, this paper proposes a method called self-supervised Attributed Graph Clustering with Dual Contrastive Loss constraints (AGC-DCL). Firstly, Siamese Multilayer Perceptron (MLP) encoders are employed to generate two views separately to avoid complex data augmentation. Secondly, the neighborhood contrastive loss is introduced to constrain node representation using local topological structure while effectively embedding attribute information through attribute reconstruction. Additionally, clustering-oriented contrastive loss is applied to fully utilize clustering information in global semantics for discriminative node representations, regarding the cluster centers from two views as negative samples to fully leverage effective clustering information from different views. Comparative clustering results with existing attributed graph clustering algorithms on six datasets demonstrate the superiority of the proposed method.Keywords: attributed graph clustering, contrastive learning, clustering-oriented, self-supervised learning
Procedia PDF Downloads 53
26460 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure
Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik
Abstract:
Lattice Boltzmann Method has been advantageous in simulating complex boundary conditions and solving for fluid flow parameters by streaming and collision processes. This paper includes the study of three different test cases in a confined domain using the method of the Lattice Boltzmann model. 1. An SRT (Single Relaxation Time) approach in the Lattice Boltzmann model is used to simulate Lid Driven Cavity flow for different Reynolds Number (100, 400 and 1000) with a domain aspect ratio of 1, i.e., square cavity. A moment-based boundary condition is used for more accurate results. 2. A Thermal Lattice BGK (Bhatnagar-Gross-Krook) Model is developed for the Rayleigh Benard convection for both test cases - Horizontal and Vertical Temperature difference, considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both the test cases (10^3 ≤ Ra ≤ 10^6) keeping the Prandtl number at 0.71. A stability criteria with a precise forcing scheme is used for a greater level of accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy based Lattice Boltzmann Model with a single iteration for each time step, thus reducing the computational time. A double distribution function approach with D2Q9 (density) model and D2Q5 (temperature) model are used for two different test cases-the conduction dominated melting and the convection dominated melting. The solidification process is also simulated using the enthalpy based method with a single distribution function using the D2Q5 model to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2 with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations are within the incompressible regime. Different parameters like velocities, temperature, Nusselt number, etc. are calculated for a comparative study with the existing works of literature. The simulated results demonstrate excellent agreement with the existing benchmark solution within an error limit of ± 0.05 implicates the viability of this method for complex fluid flow problems.Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT
Procedia PDF Downloads 128
26459 Numerical Simulation of Transient 3D Temperature and Kerf Formation in Laser Fusion Cutting
Authors: Karim Kheloufi, El Hachemi Amara
Abstract:
In the present study, a three-dimensional transient numerical model was developed to study the temperature field and cutting kerf shape during laser fusion cutting. The finite volume model has been constructed, based on the Navier–Stokes equations and energy conservation equation for the description of momentum and heat transport phenomena, and the Volume of Fluid (VOF) method for free surface tracking. The Fresnel absorption model is used to handle the absorption of the incident wave by the surface of the liquid metal and the enthalpy-porosity technique is employed to account for the latent heat during melting and solidification of the material. To model the physical phenomena occurring at the liquid film/gas interface, including momentum/heat transfer, a new approach is proposed which consists of treating friction force, pressure force applied by the gas jet and the heat absorbed by the cutting front surface as source terms incorporated into the governing equations. All these physics are coupled and solved simultaneously in Fluent CFD®. The main objective of using a transient phase change model in the current case is to simulate the dynamics and geometry of a growing laser-cutting generated kerf until it becomes fully developed. The model is used to investigate the effect of some process parameters on temperature fields and the formed kerf geometry.Keywords: laser cutting, numerical simulation, heat transfer, fluid flow
Procedia PDF Downloads 339
26458 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores
Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi
Abstract:
In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial to leverage synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, neural network predictions and synergy scores of the two drugs on others within their clusters are used to predict the synergy score of the considered drug pair. This approach facilitates comparative analysis with clustering and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods like DeepSynergy and DeepDDS on diverse datasets such as Oniel and Almanac. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.Keywords: drug synergy, clustering, prediction, machine learning., deep learning
Procedia PDF Downloads 79
26457 A Weighted K-Medoids Clustering Algorithm for Effective Stability in Vehicular Ad Hoc Networks
Authors: Rejab Hajlaoui, Tarek Moulahi, Hervé Guyennet
Abstract:
In a highway scenario, vehicle speeds can exceed 120 km/h, so any vehicle can enter or leave the network within a very short time. This mobility adversely affects network connectivity and decreases the lifetime of all established links. To ensure effective stability in vehicular ad hoc networks with a minimum broadcasting storm, we have developed a weighted algorithm based on the k-medoids clustering algorithm (WKCA). The number of clusters and the initial cluster heads are not selected randomly as usual, but by considering the available transmission range and the size of the environment. Then, to ensure an optimal assignment of nodes to clusters in both k-medoids phases, the combined weight of each node is computed from additional metrics including direction, relative speed, and proximity. Empirical results show that, in addition to the convergence speed that characterizes the k-medoids algorithm, the proposed model outperforms both the AODV-Clustering and OLSR-Clustering protocols under different densities and velocities in terms of end-to-end delay, packet delivery ratio, and throughput.
Keywords: communication, clustering algorithm, k-medoids, sensor, vehicular ad hoc network
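A sketch of one way such a combined node weight could be formed: direction, relative speed, and proximity to neighbours are averaged and mixed with illustrative coefficients. The exact metric and normalisation used by WKCA are not reproduced here.

```python
import numpy as np

def combined_weight(node, neighbours, a=0.4, b=0.3, c=0.3):
    """Lower weight = more stable cluster-head candidate (toy convention)."""
    d_pos = np.linalg.norm(neighbours["pos"] - node["pos"], axis=1)       # proximity
    d_speed = np.abs(neighbours["speed"] - node["speed"])                 # relative speed
    d_dir = np.abs(neighbours["heading"] - node["heading"]) % 360         # direction difference
    d_dir = np.minimum(d_dir, 360 - d_dir)
    return a * d_pos.mean() + b * d_speed.mean() + c * d_dir.mean()

node = {"pos": np.array([0.0, 0.0]), "speed": 110.0, "heading": 90.0}
neighbours = {
    "pos": np.array([[50.0, 5.0], [-80.0, 2.0], [120.0, -4.0]]),
    "speed": np.array([105.0, 118.0, 95.0]),
    "heading": np.array([92.0, 88.0, 95.0]),
}
print(combined_weight(node, neighbours))
```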
Procedia PDF Downloads 238
26456 Multisignature Schemes for Reinforcing Trust in Cloud Software-As-A-Service Services
Authors: Mustapha Hedabou, Ali Azougaghe, Ahmed Bentajer, Hicham Boukhris, Mourad Eddiwani, Zakaria Igarramen
Abstract:
Software-as-a-service (SaaS) is emerging as a dominant approach to delivering software. It encompasses a range of business and technical opportunities, issues, and challenges. Trust in cloud services, regarding the security and privacy of the delivered data, is the most critical issue with the SaaS model. In this paper, we survey the security concerns related to the SaaS model, and we propose the design of a trusted SaaS model that gives users more confidence in SaaS services by leveraging trust in a neutral source-code certifying authority. The proposed design is based on the use of a multisignature mechanism for signing the source code of the application service. In our model, the cloud provider acts as a root of trust by ensuring the integrity of the application service while it is running on its platform. The proposed design prevents insiders from tampering with the application service before and after it is launched on the cloud provider's platform.
Keywords: cloud computing, SaaS Platform, TPM, trustiness, code source certification, multi-signature schemes
Procedia PDF Downloads 275
26455 Nonparametric Truncated Spline Regression Model on the Data of Human Development Index in Indonesia
Authors: Kornelius Ronald Demu, Dewi Retno Sari Saputro, Purnami Widyaningsih
Abstract:
The Human Development Index (HDI) is a standard measurement of a country's human development. Several factors may influence it, such as life expectancy, gross domestic product (GDP) based on the province's annual expenditure, the number of poor people, and the percentage of illiterate people. The scatter plots between the HDI and these influencing factors show that the data do not follow a specific pattern or form. Therefore, the HDI data for Indonesia can be fitted with a nonparametric regression model. The estimation of the regression curve in a nonparametric regression model is flexible because it follows the shape of the data pattern. One nonparametric regression method is the truncated spline. Truncated spline regression is a nonparametric approach based on a modification of segmented polynomial functions. The estimator of a truncated spline regression model is affected by the selection of the optimal knot points, which are the focal points of truncated spline functions. The optimal knot points are determined by the minimum value of the generalized cross validation (GCV) criterion. In this article, the Human Development Index data are fitted with a truncated spline nonparametric regression model. The best truncated spline regression model for the HDI data in Indonesia is obtained with the combination of optimal knot points 5-5-5-4. Life expectancy and the percentage of illiterate people are the factors significantly related to the HDI in Indonesia. The coefficient of determination is 94.54%, which means the regression model fits the HDI data for Indonesia well.
Keywords: generalized cross validation (GCV), Human Development Index (HDI), knots point, nonparametric regression, truncated spline
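A small illustration of the truncated power basis and the GCV-based knot selection the abstract relies on, under simplifying assumptions (one predictor, a linear spline, and a single knot chosen from a coarse grid).

```python
import numpy as np

def truncated_basis(x, knots):
    """Design matrix of a linear truncated spline: [1, x, (x - k)_+ for each knot]."""
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

def gcv(x, y, knots):
    """Generalized cross validation score for a given knot set."""
    X = truncated_basis(x, knots)
    H = X @ np.linalg.pinv(X.T @ X) @ X.T             # hat matrix
    resid = y - H @ y
    n = len(y)
    return (np.sum(resid**2) / n) / (1 - np.trace(H) / n) ** 2

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, 120))
y = np.piecewise(x, [x < 4, x >= 4],
                 [lambda t: 0.5 * t, lambda t: 2 + 1.5 * (t - 4)]) + rng.normal(0, 0.3, x.size)

candidates = [[k] for k in np.linspace(1, 9, 17)]      # single-knot candidates
best = min(candidates, key=lambda knots: gcv(x, y, knots))
print("optimal knot:", best, "GCV:", gcv(x, y, best))  # should land near x = 4
```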
Procedia PDF Downloads 339
26454 Convolutional Neural Networks-Optimized Text Recognition with Binary Embeddings for Arabic Expiry Date Recognition
Authors: Mohamed Lotfy, Ghada Soliman
Abstract:
Recognizing Arabic dot-matrix digits is a challenging problem due to the unique characteristics of dot-matrix fonts, such as irregular dot spacing and varying dot sizes. This paper presents an approach for recognizing Arabic digits printed in dot matrix format. The proposed model is based on Convolutional Neural Networks (CNN) that take the dot matrix as input and generate embeddings that are rounded to generate binary representations of the digits. The binary embeddings are then used to perform Optical Character Recognition (OCR) on the digit images. To overcome the challenge of the limited availability of dotted Arabic expiration date images, we developed a True Type Font (TTF) for generating synthetic images of Arabic dot-matrix characters. The model was trained on a synthetic dataset of 3287 images and 658 synthetic images for testing, representing realistic expiration dates from 2019 to 2027 in the format of yyyy/mm/dd. Our model achieved an accuracy of 98.94% on the expiry date recognition with Arabic dot matrix format using fewer parameters and less computational resources than traditional CNN-based models. By investigating and presenting our findings comprehensively, we aim to contribute substantially to the field of OCR and pave the way for advancements in Arabic dot-matrix character recognition. Our proposed approach is not limited to Arabic dot matrix digit recognition but can also be extended to text recognition tasks, such as text classification and sentiment analysis.Keywords: computer vision, pattern recognition, optical character recognition, deep learning
Procedia PDF Downloads 93
26453 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult for children who speak it. The traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, making it difficult to cover a wide range of risk detection and time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilized various deep learning techniques for detecting risk of Dyslexia and Dysgraphia. Specifically, Resnet50, VGG16 and YOLOv8 were integrated to detect the handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed for the optimal values to be identified for the model. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The Resnet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science
Procedia PDF Downloads 114
26452 Elastic and Plastic Collision Comparison Using Finite Element Method
Authors: Gustavo Rodrigues, Hans Weber, Larissa Driemeier
Abstract:
The prediction of post-impact conditions and of the behavior of the bodies during impact has been the object of several collision models. The formulation from Hertz's theory, dating from the 19th century, is generally used. These models consider the repulsive force to be proportional to the deformation of the bodies in contact and may also consider it proportional to the rate of deformation. The objective of the present work is to analyze the behavior of the bodies during impact using the Finite Element Method (FEM) with elastic and plastic material models. The main parameters evaluated are the contact force, the contact time, and the deformation of the bodies. An advantage of the FEM approach is the possibility of including plastic deformation in the model according to the material definition: the Johnson-Cook plasticity model is used, whose parameters are obtained through empirical tests on real materials. This model allows the analysis of the permanent deformation caused by impact, a phenomenon observed in the real world depending on the forces applied to the body. The results are compared with each other and with the Hertz-theory-based model.
Keywords: collision, impact models, finite element method, Hertz Theory
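For reference, a short helper for the Hertzian baseline the abstract compares against: the elastic contact force between two spheres as a function of indentation. The material and geometry values are generic steel-like placeholders.

```python
import numpy as np

def hertz_force(delta, R1, R2, E1, E2, nu1, nu2):
    """Hertz contact force F = (4/3) * E_eff * sqrt(R_eff) * delta**1.5 for two spheres."""
    E_eff = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)   # effective elastic modulus
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)                     # effective radius
    return (4.0 / 3.0) * E_eff * np.sqrt(R_eff) * delta**1.5

# Example: two 5 cm steel spheres indented by 10 micrometres.
print(hertz_force(delta=1e-5, R1=0.05, R2=0.05, E1=210e9, E2=210e9, nu1=0.3, nu2=0.3))
```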
Procedia PDF Downloads 174