Search results for: geometric and topological data models
28213 Operating System Based Virtualization Models in Cloud Computing
Authors: Dev Ras Pandey, Bharat Mishra, S. K. Tripathi
Abstract:
Cloud computing is poised to transform the structure of businesses and learning by supplying real-time applications and providing immediate support for small and medium-sized businesses. The ability to run a hypervisor inside a virtual machine, known as nested virtualization, is an important feature of virtualization. In today's growing field of information technology, many virtualization models are available, each convenient to implement, but selecting a single model is difficult. This paper explains the applications of operating-system-based virtualization in cloud computing and identifies a suitable model for different specifications and user requirements. The most popular models, covering both container-based and hypervisor-based virtualization, are selected and compared against a wide range of user requirements, such as the number of CPUs, memory size, nested virtualization support, live migration, and commercial support, and the most suitable virtualization model is identified.
Keywords: virtualization, OS based virtualization, container based virtualization, hypervisor based virtualization
Procedia PDF Downloads 326
28212 Analysis of Big Data
Authors: Sandeep Sharma, Sarabjit Singh
Abstract:
Given user demand and the growth of large volumes of free data, storage solutions face mounting challenges in protecting, storing, and retrieving data. The day is not far off when storage companies and organizations will either refuse to store our valuable data or charge heavily for its storage and protection. At the same time, environmental conditions make it challenging to establish and maintain new data warehouses and data centers in the face of global-warming threats. The challenge of small data is over; the big challenge now is how to manage the exponential growth of data. In this paper, we analyze the growth trend of big data and its future implications, examine the impact of unstructured data on various concerns, and suggest possible remedies to streamline big data.
Keywords: big data, unstructured data, volume, variety, velocity
Procedia PDF Downloads 545
28211 Modern Imputation Technique for Missing Data in Linear Functional Relationship Model
Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, Rahmatullah Imon
Abstract:
The missing-value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, evaluated with three performance indicators: mean absolute error (MAE), root mean square error (RMSE), and estimated bias (EB). In this study, we applied these methods to impute missing values in the LFRM. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.
Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators
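As an illustration of how the three performance indicators can be computed, the following minimal sketch imputes artificially deleted values with scikit-learn's EM-style IterativeImputer, which stands in for the authors' EM/EMB routines; the simulated LFRM data and the missingness rate are assumptions.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# Simulate a linear functional relationship y = 1.5 + 2x with noise on both axes
n = 200
x_true = rng.uniform(0, 10, n)
X = np.column_stack([x_true + rng.normal(0, 0.3, n),
                     1.5 + 2.0 * x_true + rng.normal(0, 0.3, n)])

# Knock out 10% of the x-values completely at random
X_miss = X.copy()
mask = rng.random(n) < 0.10
X_miss[mask, 0] = np.nan

imputed = IterativeImputer(max_iter=50, random_state=0).fit_transform(X_miss)

mae = np.mean(np.abs(imputed[mask, 0] - X[mask, 0]))
rmse = np.sqrt(np.mean((imputed[mask, 0] - X[mask, 0]) ** 2))
eb = np.mean(imputed[mask, 0] - X[mask, 0])  # estimated bias
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  EB={eb:.3f}")
```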
Procedia PDF Downloads 398
28210 Quantitative Seismic Interpretation in the LP3D Concession, Central Sirte Basin, Libya
Authors: Tawfig Alghbaili
Abstract:
The LP3D Field is located near the center of the Sirt Basin in the Marada Trough, approximately 215 km south of Marsa Al Braga City. The Marada Trough is bounded on the west by a major fault, which forms the edge of the Beda Platform, while on the east a bounding fault marks the edge of the Zelten Platform. The main reservoir in the LP3D Field is the Upper Paleocene Beda Formation, which is mainly limestone interbedded with shale; the reservoir's average thickness is 117.5 feet. To develop a better understanding of the characterization and distribution of the Beda reservoir, quantitative seismic interpretation was carried out and well-log data were analyzed. Six reflectors corresponding to the tops of the Beda, Hagfa Shale, Gir, Kheir Shale, Khalifa Shale, and Zelten Formations were picked and mapped. Special attention was paid to fault interpretation because of the structural complexity of the area. Several attribute analyses were performed to better understand the lateral extent of the structures and to image the fault blocks clearly. Time-to-depth conversion was computed using velocity modeling generated from checkshot and sonic data. A simplified stratigraphic cross-section drawn through wells A1, A2, A3, and A4-LP3D demonstrates the distribution and thickness variations of the Beda reservoir across the study area. Petrophysical analysis of the wireline logs was also performed, and cross-plots of selected petrophysical parameters were generated to evaluate the lithology of the reservoir interval. A structural and stratigraphic framework was designed and run to generate fault, facies, and petrophysical models and to calculate reservoir volumetrics. The study concludes that the depth structure map of the Beda Formation shows the main structure in the area, a north-south faulted anticline. Based on the Beda reservoir models, volumetrics for the base case were calculated, giving a STOIIP of 41 MMSTB and recoverable oil of 10 MMSTB. Seismic attributes confirm the structural trend and build a better understanding of the fault system in the area.
Keywords: LP3D Field, Beda Formation, reservoir models, seismic attributes
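For readers unfamiliar with how such volumetrics are obtained, here is a minimal sketch of the standard STOIIP formula; all input values below are hypothetical, since the paper does not report them.

```python
def stoiip_mmstb(grv_acre_ft, ntg, porosity, sw, bo):
    """Standard volumetric estimate: STOIIP = 7758 * GRV * NTG * phi * (1 - Sw) / Bo,
    with GRV in acre-feet and the result in millions of stock-tank barrels."""
    return 7758.0 * grv_acre_ft * ntg * porosity * (1.0 - sw) / bo / 1e6

# Hypothetical inputs only -- the abstract reports the result (41 MMSTB), not these.
print(stoiip_mmstb(grv_acre_ft=60_000, ntg=0.7, porosity=0.18, sw=0.35, bo=1.2))
```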
Procedia PDF Downloads 211
28209 The Development of Packaging to Create Additional Value for Organic Rice Products of Uttaradit Province, Thailand
Authors: Juntima Pokkrong
Abstract:
The objectives of the study were to develop packaging made from the rice straw left after harvest, in order to create additional value for organic rice products of Uttaradit Province, and to demonstrate the technology of producing straw packaging to the community. The population comprised promoters of organic rice distributors, governmental organizations, consumers, and three groups of organic rice producers: the Agriculturist Group of Khorrum Sub-district, Pichai District, Uttaradit Province; the Agriculturist Group of Wangdin Sub-district, Muang District, Uttaradit Province; and the Agriculturist Group of Wangkapi Sub-district, Muang District, Uttaradit Province. The data were collected via group discussions and two types of questionnaires, then analyzed using descriptive statistics (percentage, mean, and standard deviation) and content analysis. It was found that primary packaging for one kilogram of rice requires vacuum plastic bags made from thermoplastic or resin, because they preserve the quality of the rice for a long time and are also very cheap. For secondary packaging, the making of straw paper was studied and applied. Straw paper can be used for various purposes, and in this study it was used to create secondary packaging models in line with the packaging preferences gathered from the questionnaires. The models were then evaluated by the population using satisfaction questionnaires, and overall satisfaction was high.
Keywords: environmentally friendly, organic rice, packaging, straw paper
Procedia PDF Downloads 243
28208 Drain Current for Various Values of Mobility in the GaAs MESFET
Authors: S. Belhour, A. K. Ferouani, C. Azizi
Abstract:
In recent years, considerable effort (experiment, numerical simulation, and theoretical prediction models) has been devoted to devices characterized by high efficiency and low cost, and an improved analytical physics-based model for simulation is proposed here. The model of GaAs MESFET performance has been developed for use in high-frequency device design. It is based on mathematical analysis, and a new approach to the standard model is proposed. This approach yields a model applicable to MESFETs operating in the turn-on or pinch-off region and valid for both short-channel and long-channel MESFETs, in which the two-dimensional potential distribution contributed by the depletion layer under the gate is obtained by conventional approximation. Moreover, comparisons between the analytical models for different values of mobility are presented, and good agreement is obtained.
Keywords: analytical, gallium arsenide, MESFET, mobility, models
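To make the mobility dependence concrete, here is a minimal sketch of a textbook square-law (gradual-channel) drain-current estimate, not the paper's improved model; the device geometry and doping values are hypothetical.

```python
def drain_current(vgs, vds, mu, w=50e-6, l=1e-6, a=0.2e-6,
                  eps=1.14e-10, nd=1e23, vbi=0.8):
    """Textbook square-law estimate of MESFET drain current (A).
    Geometry, doping, and permittivity are illustrative assumptions."""
    q = 1.602e-19
    vp = q * nd * a**2 / (2 * eps)        # pinch-off voltage
    vt = vbi - vp                         # threshold voltage
    beta = mu * eps * w / (2 * a * l)     # transconductance parameter
    vov = vgs - vt                        # gate overdrive
    if vov <= 0:
        return 0.0                        # channel pinched off
    return beta * (2 * vov * vds - vds**2) if vds < vov else beta * vov**2

for mu in (0.30, 0.50, 0.85):             # electron mobility in m^2/(V*s)
    print(mu, drain_current(vgs=-0.5, vds=2.0, mu=mu))
```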
Procedia PDF Downloads 73
28207 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. Choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach and compare generalization power; the same subset is used across all training and testing for each model, so that performance on unseen data can be compared across all the different models. The best CNN (AlexNet), with the appropriate loss function and optimizer, yields more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports model parameters and mean average error rates to find the model that consumes the least memory and computation time for training and testing. Although AlexNet is less complex than the other CNN models, it proves very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to the final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
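A minimal sketch of the kind of loss/optimizer sweep described above, using Keras built-ins; the small stand-in network and the loss list are assumptions (Center Loss has no Keras built-in and would need a custom implementation).

```python
import itertools
import tensorflow as tf

def build_cnn():
    # Small stand-in network; the paper used AlexNet/VGGNet/ResNet.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(96, 96, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # live vs. spoof
    ])

losses = ["binary_crossentropy", "hinge", "cosine_similarity"]  # hinge expects +/-1 labels
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

for loss, opt in itertools.product(losses, optimizers):
    model = build_cnn()
    model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
    # model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)
```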
Procedia PDF Downloads 136
28206 Message Passing Neural Network (MPNN) Approach to Multiphase Diffusion in Reservoirs for Well Interconnection Assessments
Authors: Margarita Mayoral-Villa, J. Klapp, L. Di G. Sigalotti, J. E. V. Guzmán
Abstract:
Automated learning techniques are widely applied in the energy sector to address challenging problems from a practical point of view. To this end, we discuss the implementation of a Message Passing algorithm (MPNN) within a Graph Neural Network (GNN) to leverage the neighborhood of a set of nodes during the aggregation process. This approach enables the characterization of multiphase diffusion processes in the reservoir, such that the flow paths underlying the interconnections between multiple wells may be inferred from previously available data on flow rates and bottomhole pressures. The results thus obtained compare favorably with the predictions produced by the Reduced Order Capacitance-Resistance Models (CRM) and suggest the potential of MPNNs to enhance the robustness of the forecasts while improving the computational efficiency.
Keywords: multiphase diffusion, message passing neural network, well interconnection, interwell connectivity, graph neural network, capacitance-resistance models
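The core of a message-passing layer can be sketched in a few lines of NumPy; the toy well graph, feature sizes, and tanh update below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def mpnn_layer(h, edges, w_msg, w_upd):
    """One round of message passing: each node aggregates messages
    from its neighbours, then updates its own state."""
    msgs = np.zeros_like(h)
    for src, dst in edges:                 # directed edge src -> dst
        msgs[dst] += np.tanh(h[src] @ w_msg)
    return np.tanh(np.concatenate([h, msgs], axis=1) @ w_upd)

# Toy graph: 4 wells; edges are hypothesised interconnections.
rng = np.random.default_rng(1)
h = rng.normal(size=(4, 8))                # node features: rates, pressures, ...
edges = [(0, 1), (1, 0), (1, 2), (2, 3)]
w_msg = rng.normal(size=(8, 8))
w_upd = rng.normal(size=(16, 8))
h = mpnn_layer(h, edges, w_msg, w_upd)
```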
Procedia PDF Downloads 147
28205 Efficient Deep Neural Networks for Real-Time Strawberry Freshness Monitoring: A Transfer Learning Approach
Authors: Mst. Tuhin Akter, Sharun Akter Khushbu, S. M. Shaqib
Abstract:
A real-time system architecture is highly effective for monitoring and detecting various damaged products or fruits that may deteriorate over time or become infected with diseases. Deep learning models have proven to be effective in building such architectures. However, building a deep learning model from scratch is a time-consuming and costly process. A more efficient solution is to utilize deep neural network (DNN) based transfer learning models in the real-time monitoring architecture. This study focuses on using a novel strawberry dataset to develop effective transfer learning models for the proposed real-time monitoring system architecture, specifically for evaluating and detecting strawberry freshness. Several state-of-the-art transfer learning models were employed, and the best performing model was found to be Xception, demonstrating higher performance across evaluation metrics such as accuracy, recall, precision, and F1-score.
Keywords: strawberry freshness evaluation, deep neural network, transfer learning, image augmentation
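A minimal transfer-learning sketch in Keras with a frozen Xception backbone; the two-class head and the input pipeline are assumptions, since the abstract does not specify them.

```python
import tensorflow as tf

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                      # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(2, activation="softmax"),  # fresh vs. rotten (assumed classes)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# train_ds / val_ds would come from the strawberry image dataset, e.g. via
# tf.keras.utils.image_dataset_from_directory(..., image_size=(299, 299))
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```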
Procedia PDF Downloads 89
28204 Statistically Accurate Synthetic Data Generation for Enhanced Traffic Predictive Modeling Using Generative Adversarial Networks and Long Short-Term Memory
Authors: Srinivas Peri, Siva Abhishek Sirivella, Tejaswini Kallakuri, Uzair Ahmad
Abstract:
Effective traffic management and infrastructure planning are crucial for the development of smart cities and intelligent transportation systems. This study addresses the challenge of data scarcity by generating realistic synthetic traffic data using the PeMS-Bay dataset, improving the accuracy and reliability of predictive modeling. Advanced synthetic data generation techniques, including TimeGAN, GaussianCopula, and PAR Synthesizer, are employed to produce synthetic data that replicates the statistical and structural characteristics of real-world traffic. Future integration of Spatial-Temporal Generative Adversarial Networks (ST-GAN) is planned to capture both spatial and temporal correlations, further improving data quality and realism. The performance of each synthetic data generation model is evaluated against real-world data to identify the best models for accurately replicating traffic patterns. Long Short-Term Memory (LSTM) networks are utilized to model and predict complex temporal dependencies within traffic patterns. This comprehensive approach aims to pinpoint areas with low vehicle counts, uncover underlying traffic issues, and inform targeted infrastructure interventions. By combining GAN-based synthetic data generation with LSTM-based traffic modeling, this study supports data-driven decision-making that enhances urban mobility, safety, and the overall efficiency of city planning initiatives.
Keywords: GAN, long short-term memory, synthetic data generation, traffic management
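As a sketch of the LSTM forecasting component, the following trains a one-step-ahead model on a synthetic stand-in for a PeMS-Bay sensor series; the window length and architecture are assumptions.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=12):
    """Turn a univariate traffic series into (lookback -> next step) samples."""
    x = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    return x[..., None], series[lookback:]

# Synthetic stand-in for a 5-minute vehicle-count series with a daily cycle.
t = np.arange(2000)
series = (50 + 20 * np.sin(2 * np.pi * t / 288)
          + np.random.default_rng(0).normal(0, 3, t.size)).astype("float32")
x, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(12, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=3, batch_size=64, verbose=0)
```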
Procedia PDF Downloads 23
28203 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics
Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee
Abstract:
Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations", the generation of outputs not grounded in the input data, which hinder adoption in production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground the LLM's response in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts by vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information in the retrieved documents. However, a RAG system is not suitable for tabular data and subsequent data analysis tasks, for reasons including information loss, data format, and the retrieval mechanism. In this study, we explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When a beta version was deployed on DataSense, the property-insight tool of PropertyGuru, the approach yielded promising results: it provided market insights and data visualizations with high accuracy and extensive coverage, abstracting the complexity away from real-estate agents and developers with non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real-estate professionals, aiding market understanding and enhancement without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real-estate industry characterized by efficiency and advanced data utilization.
Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru
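The control flow of such an agent can be sketched as below; llm and execute_sandboxed are placeholders for a chat-completion client and an isolated code runner, and the prompts are illustrative, not PropertyGuru's implementation.

```python
# Minimal shape of a planning-and-execution + code-generation agent loop.

def answer_analytics_question(llm, question, table_schema):
    plan = llm(f"Break this analytics task into ordered sub-tasks:\n{question}\n"
               f"Available table schema:\n{table_schema}")
    results = []
    for step in plan.splitlines():
        code = llm(f"Write Python (pandas) code for this sub-task only:\n{step}\n"
                   f"Schema:\n{table_schema}\nAssign the answer to `result`.")
        results.append(execute_sandboxed(code))   # run generated code in isolation
    return llm(f"Question: {question}\nIntermediate results: {results}\n"
               f"Compose the final answer grounded in these results.")

def execute_sandboxed(code):
    scope = {}
    exec(code, scope)            # a real system needs a proper sandbox and timeouts
    return scope.get("result")   # assumes the generated code assigns `result`
```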
Procedia PDF Downloads 86
28202 Ontology Expansion via Synthetic Dataset Generation and Transformer-Based Concept Extraction
Authors: Andrey Khalov
Abstract:
The rapid proliferation of unstructured data in IT infrastructure management demands innovative approaches for extracting actionable knowledge. This paper presents a framework for ontology-based knowledge extraction that combines relational graph neural networks (R-GNN) with large language models (LLMs). The proposed method leverages the DOLCE framework as the foundational ontology, extending it with concepts from ITSMO for domain-specific applications in IT service management and outsourcing. A key component of this research is the use of transformer-based models, such as DeBERTa-v3-large, for automatic entity and relationship extraction from unstructured texts. Furthermore, the paper explores how transfer-learning techniques can be applied to fine-tune large language models (LLaMA) to generate synthetic datasets that improve precision in BERT-based entity recognition and ontology alignment. The resulting IT Ontology (ITO) serves as a comprehensive knowledge base that integrates domain-specific insights from ITIL processes, enabling more efficient decision-making. Experimental results demonstrate significant improvements in knowledge extraction and relationship mapping, offering a cutting-edge solution for enhancing cognitive computing in IT service environments.
Keywords: ontology expansion, synthetic dataset, transformer fine-tuning, concept extraction, DOLCE, BERT, taxonomy, LLM, NER
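A minimal token-classification sketch with Hugging Face Transformers shows the extraction step; the label count and example sentence are assumptions, and the classification head here is freshly initialized (the paper fine-tunes DeBERTa-v3-large on its own annotated data).

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "microsoft/deberta-v3-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels is an assumed placeholder; the head is untrained until fine-tuning.
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=5)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("The incident was escalated to the service desk per the ITIL change process."))
```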
Procedia PDF Downloads 12
28201 The Effectiveness of Multiphase Flow in Well-Control Operations
Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia
Abstract:
Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to loss of human life and drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid and formation gas miscible in the drilling fluid. Current approaches assume an inaccurate fluid-flow model within the well, which leads to incorrect pressure-loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid-flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as managed pressure drilling. This study aims to develop and implement new fluid-flow models that take into consideration the miscibility of the fluids as well as their non-Newtonian properties, enabling realistic kick treatment, together with a corresponding numerical solution method built on an enriched data bank. The work considers and implements models that account for two-phase effects in kick treatment for well control in conventional drilling. The software STAR-CCM+ was used for the computational studies of the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was on the near-drill-bit section; this inflow area shows characteristics dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottomhole pressure could be underestimated by 4.2%, while the bottomhole temperature is overestimated by 3.2%; without considering the heat-transfer effect, the bottomhole pressure could be overestimated by 11.4% under steady flow conditions. In addition, larger reservoir pressure leads to a larger gas fraction in the wellbore, although reservoir pressure has a minor effect on the steady wellbore temperature. Also, as choke pressure increases, less free gas exists in the annulus.
Keywords: multiphase flow, well control, STAR-CCM+, petroleum engineering and gas technology, computational fluid dynamics
Procedia PDF Downloads 117
28200 Maximum Power and Bone Variables in Young Adult Men
Authors: Anthony Khawaja, Jacques Prioux, Ghassan Maalouf, Rawad El Hage
Abstract:
The regular practice of physical activities characterized by significant mechanical stress stimulates bone formation and improves bone mineral density (BMD) in the most solicited sites. The purpose of this study was to explore the relationships between maximum power and bone variables in a group of young adult men. Identifying new determinants of BMD, bone mineral content (BMC), and hip geometric indices in young adult men would allow screening and early management of future cases of osteopenia and osteoporosis. Fifty-three young adult men (18-35 yr) voluntarily participated in this study. Weight and height were measured, and body mass index was calculated. Body composition, BMC, and BMD were determined for each individual by dual-energy X-ray absorptiometry (DXA; GE Healthcare, Madison, WI) at the whole body (WB), lumbar spine (L1-L4), total hip (TH), and femoral neck (FN). FN cross-sectional area (CSA), strength index (SI), buckling ratio (BR), FN section modulus (Z), cross-sectional moment of inertia (CSMI), and L1-L4 TBS were also evaluated by DXA. The vertical jump was evaluated using a field test (Sargent test), and two main parameters were retained: vertical jump performance (cm) and power (W). The subjects performed three jumps with 2 minutes of recovery between jumps, and the highest vertical jump was selected. Maximum power (Pmax, in watts) was calculated. Maximum power was positively correlated with WB BMD (r = 0.41; p < 0.01), WB BMC (r = 0.65; p < 0.001), L1-L4 BMC (r = 0.54; p < 0.001), FN BMC (r = 0.35; p < 0.01), TH BMC (r = 0.50; p < 0.001), CSMI (r = 0.50; p < 0.001), and CSA (r = 0.33; p < 0.05). Vertical jump was positively correlated with WB BMC (r = 0.31; p < 0.05), L1-L4 BMC (r = 0.40; p < 0.01), and CSMI (r = 0.29; p < 0.05). The current study suggests that maximum power is a positive determinant of BMD, BMC, and hip geometric indices in young adult men, and that it is a stronger positive determinant of bone variables than vertical jump in this population. Implementing strategies to increase maximum power in young adult men may be useful for preventing osteoporotic fractures later in life.
Keywords: bone variables, maximum power, osteopenia, osteoporosis, vertical jump, young adult men
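The abstract does not state which power equation was used; the Sayers equation below is one commonly used way to estimate peak power from jump height and body mass, shown here purely as an assumed illustration.

```python
def peak_power_sayers(jump_cm, mass_kg):
    """Sayers equation: peak anaerobic power (W) from countermovement jump
    height (cm) and body mass (kg). An assumed stand-in for the Pmax
    calculation described in the study."""
    return 60.7 * jump_cm + 45.3 * mass_kg - 2055.0

print(peak_power_sayers(jump_cm=45, mass_kg=75))  # ~4074 W
```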
Procedia PDF Downloads 177
28199 Assessment of Pre-Processing Influence on Near-Infrared Spectra for Predicting the Mechanical Properties of Wood
Authors: Aasheesh Raturi, Vimal Kothiyal, P. D. Semalty
Abstract:
We studied the mechanical properties of Eucalyptus tereticornis using FT-NIR spectroscopy. The spectra were first pre-processed to eliminate useless information, and a prediction model was then constructed by partial least squares regression. To study the influence of pre-processing on the prediction of mechanical properties in NIR analysis of wood samples, we applied various pre-treatment methods: straight-line subtraction, constant offset elimination, vector normalization, min-max normalization, multiplicative scatter correction, first derivative, second derivative, and their combinations, such as first derivative + straight-line subtraction, first derivative + vector normalization, and first derivative + multiplicative scatter correction. For each combination of pre-processing method and NIR region, the RMSECV, RMSEP, and optimum number of factors (rank) were obtained through the optimization process of model development; more than 350 combinations were evaluated. More than one pre-processing method gave good calibration/cross-validation and prediction/test models, but only the best calibration/cross-validation and prediction/test models are reported here. The results show that one can safely use the NIR region between 4000 and 7500 cm-1 with straight-line subtraction, constant offset elimination, first derivative, or second derivative pre-processing, which were found to be the most appropriate for model development.
Keywords: FT-NIR, mechanical properties, pre-processing, PLS
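A minimal sketch of one such pre-processing + PLS pipeline (standard normal variate as a stand-in for vector normalization, followed by a Savitzky-Golay first derivative); the synthetic spectra and component count are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# X: (samples, wavenumbers) absorbance spectra; y: a mechanical property.
# Synthetic placeholders -- real spectra would be restricted to 4000-7500 cm^-1.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 400))
y = X[:, 50] * 3.0 + rng.normal(0, 0.1, 60)

def snv(spectra):
    """Standard normal variate: per-spectrum centring and scaling."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

X_pre = savgol_filter(snv(X), window_length=11, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=6)   # rank would be chosen by cross-validation
print(cross_val_score(pls, X_pre, y, cv=5, scoring="neg_root_mean_squared_error"))
```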
Procedia PDF Downloads 357
28198 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus
Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo
Abstract:
The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve building management by using the collected data to monitor consumption patterns and energy leakages; one example is the integration of predictive models for anomaly detection. In this paper, we use the Generalised Additive Model (GAM) for anomaly detection in the power-consumption pattern of Air Handling Units (AHU). There is ample research on the use of GAM for predicting power consumption at the office-building and nationwide levels, but limited illustration of its anomaly-detection capabilities, prescriptive-analytics case studies, and its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power-consumption and cooling-load data of a building on an education campus in Singapore, from Jan 2018 to Aug 2019, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly-detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data and without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop in which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for detecting anomalous power-consumption patterns, illustrated with real-world use cases.
Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning
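A minimal sketch of interval-based anomaly flagging with the pyGAM library; the synthetic hour-of-day and cooling-load features stand in for the campus digital-twin data, and the 95% interval width is an assumption.

```python
import numpy as np
from pygam import LinearGAM, s

# Features: hour of day and cooling load; response: AHU power. Synthetic stand-ins.
rng = np.random.default_rng(0)
hour = rng.uniform(0, 24, 2000)
load = rng.uniform(50, 300, 2000)
power = 5 + 2 * np.sin(hour / 24 * 2 * np.pi) + 0.05 * load + rng.normal(0, 0.3, 2000)
X = np.column_stack([hour, load])

gam = LinearGAM(s(0) + s(1)).fit(X, power)
lo, hi = gam.prediction_intervals(X, width=0.95).T

# Flag points falling outside the 95% interval as candidate anomalies.
anomalous = (power < lo) | (power > hi)
print(f"{anomalous.mean():.1%} of points flagged")
```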
Procedia PDF Downloads 152
28197 Gene Names Identity Recognition Using Siamese Network for Biomedical Publications
Authors: Micheal Olaolu Arowolo, Muhammad Azam, Fei He, Mihail Popescu, Dong Xu
Abstract:
As the quantity of biological articles rises, so does the number of biological pathway figures. Each pathway figure shows gene names and relationships, and annotating pathway diagrams manually is time-consuming. Advanced image-understanding models could speed up curation, but they must become more precise. Biological pathway figures contain rich information, and the first step in understanding these figures is to recognize gene names automatically. Classical optical character recognition methods have been employed for gene-name recognition, but they are not optimized for literature-mining data. This study devised a method to recognize gene-name bounding boxes in figure images using deep Siamese neural network models built on ResNet, DenseNet, and Inception architectures, outperforming existing methods; the results reached about 84% accuracy.
Keywords: biological pathway, gene identification, object detection, Siamese network
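The two-tower structure of a Siamese matcher can be sketched as follows; the crop size, embedding dimension, and small CNN tower are assumptions (the study used ResNet/DenseNet/Inception backbones).

```python
import tensorflow as tf

def embedder(input_shape=(64, 128, 1)):
    """Shared CNN tower mapping a cropped gene-name image to an embedding."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64),
    ])

tower = embedder()
a = tf.keras.Input((64, 128, 1))
b = tf.keras.Input((64, 128, 1))
# Squared distance between the two embeddings; small distance -> same gene name.
dist = tf.keras.layers.Lambda(
    lambda t: tf.reduce_sum(tf.square(t[0] - t[1]), axis=1, keepdims=True)
)([tower(a), tower(b)])
out = tf.keras.layers.Dense(1, activation="sigmoid")(dist)

siamese = tf.keras.Model([a, b], out)
siamese.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```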
Procedia PDF Downloads 289
28196 Horizontal Cooperative Game Theory in Hotel Revenue Management
Authors: Ririh Rahma Ratinghayu, Jayu Pramudya, Nur Aini Masruroh, Shi-Woei Lin
Abstract:
This research studies pricing strategy in a cooperative setting of a hotel duopoly selling a perishable product under a fixed capacity constraint, from the perspective of managers. In hotel revenue management, the competitor's average room rate and occupancy rate should be taken into consideration when determining a pricing strategy that generates optimum revenue. This information is not provided by business intelligence and is not available on the competitor's website; thus, Information Sharing (IS) among players might improve the performance of pricing strategies. IS is widely adopted in the logistics industry, but IS within the hospitality industry has not been well studied. This research treats IS as one of the cooperative game schemes, alongside the Mutual Price Setting (MPS) scheme. In the off-peak season, a hotel manager offers promotion packages and various discounts of up to 60% of the full price to attract customers; a competitor selling a homogeneous product reacts in kind, triggering a price war. A price war, which generates lower revenue, may be avoided through collaboration on pricing strategy to optimize the payoff for both players. In the MPS cooperative game, players collaborate to set a single room rate applied by both, avoiding the unfavorable payoffs caused by a price war. Research on horizontal cooperative games in logistics shows better performance and payoffs for the players; however, horizontal cooperative games in hotel revenue management have not been demonstrated. This paper aims to develop hotel revenue management models under duopoly cooperative schemes (IS and MPS), compared with models under a non-cooperative scheme. Each scheme has five models: the Capacity Allocation Model, Demand Model, Revenue Model, Optimal Price Model, and Equilibrium Price Model. The Capacity Allocation and Demand Models employ the hotel's own and the competitor's full and discounted prices as predictors in a non-linear relationship. The optimal price is obtained by assuming a revenue-maximization motive, and the equilibrium price is observed by interacting the hotel's and the competitor's optimal prices through reaction equations; the equilibrium is analyzed using a game-theory approach, as illustrated in the sketch after this abstract. The same sequence applies to all three schemes, except that the MPS scheme aims to optimize the players' total payoff. The case study in which the theoretical models are applied observes two hotels offering a homogeneous product in Indonesia over one year. The Capacity Allocation, Demand, and Revenue Models are built using multiple regression and statistically tested for validation. The case-study data confirm that price enters the demand model non-linearly. The IS models represent the actual demand and revenue data better than the non-IS models, and IS enables hotels to earn significantly higher revenue; thus, duopoly hotel players in general might have reasonable incentives to share information horizontally. During the off-peak season, the MPS models are able to predict the optimal equal price for both hotels; however, a Nash equilibrium may not always exist, depending on the actual payoffs of adhering to or betraying the mutual agreement. To optimize performance, a horizontal cooperative game may be chosen over a non-cooperative game. Mathematical models can be used to detect collusion among business players, and empirical testing can serve as policy input for market regulators in preventing unethical business practices that potentially harm societal welfare.
Keywords: horizontal cooperative game theory, hotel revenue management, information sharing, mutual price setting
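A minimal sketch of finding equilibrium prices from reaction equations by fixed-point iteration; the linear reaction coefficients below are illustrative assumptions (the paper derives reactions from fitted non-linear demand models).

```python
# Reaction functions p1 = a1 + b1*p2 and p2 = a2 + b2*p1 (hypothetical values).
a1, b1 = 40.0, 0.45
a2, b2 = 35.0, 0.50

p1, p2 = 100.0, 100.0
for _ in range(100):                       # fixed-point iteration to equilibrium
    p1, p2 = a1 + b1 * p2, a2 + b2 * p1

print(round(p1, 2), round(p2, 2))          # converges when |b1 * b2| < 1
```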
Procedia PDF Downloads 288
28195 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School
Authors: Martín Pratto Burgos
Abstract:
The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course is designed to help students prepare for the math courses that are essential for engineering degrees, referred to here as Math1, Math2, and Math3. The research proposes to build a model that can accurately predict students' activity and academic progress based on their performance in the three essential mathematical courses, together with a model that can forecast the effect of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling with the Generalised Linear Model. The dataset includes information from 5135 engineering students and 12 characteristics based on activity and course performance. Two models are created in the R programming language for data that follow a binomial distribution: Model 1 selects variables whose p-value is less than 0.05, and Model 2 uses the stepAIC function to remove variables and reach the lowest AIC score. After Principal Component Analysis, the main components represented on the y-axis are approval of the Introductory Mathematical Course, and on the x-axis are approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered students' activity, performed best, with an AUC of 0.81 and an accuracy of 84%; according to this model, students remain engaged in school activities for three years after passing the Introductory Mathematical Course because they have successfully completed the Math1 and Math2 courses, while passing the Math3 course has no effect on activity. Concerning academic progress, the best fit is Model 1, with an AUC of 0.56 and an accuracy of 91%; it indicates that if students pass the three first-year courses, they progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect students' activity and academic progress. The best model to explain the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy; it shows that passing the Introductory Mathematical Course helps students pass Math1 and Math2 without affecting their performance in Math3. Combining the three predictive models: if students pass the Math1 and Math2 courses, they stay active for three years after taking the Introductory Mathematical Course and continue following the recommended engineering curriculum, and the Introductory Mathematical Course helps them pass Math1 and Math2 when they start engineering school. The models obtained in the research do not consider the time students took to pass the three math courses, but they can successfully assess courses in the university curriculum.
Keywords: machine-learning, engineering, university, education, computational models
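The study fitted its binomial GLMs in R; the sketch below reproduces the same workflow in Python with statsmodels on synthetic stand-in data, reporting coefficient p-values (Model-1-style selection) and the AUC.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the 5135-student dataset: predictors are course
# approvals (0/1); the response is continued activity three years later.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5135, 3)).astype(float)   # intro, Math1, Math2
logit = -1.0 + 1.2 * X[:, 1] + 1.0 * X[:, 2]
y = (rng.random(5135) < 1 / (1 + np.exp(-logit))).astype(float)

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()
print(model.summary().tables[1])                       # p-values for variable selection
print("AUC:", roc_auc_score(y, model.predict(sm.add_constant(X))))
```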
Procedia PDF Downloads 93
28194 AutoML: Comprehensive Review and Application to Engineering Datasets
Authors: Parsa Mahdavi, M. Amin Hariri-Ardebili
Abstract:
The development of accurate machine learning and deep learning models traditionally demands hands-on expertise and a solid background to fine-tune hyperparameters. With the continuous expansion of datasets in various scientific and engineering domains, researchers increasingly turn to machine learning methods to unveil hidden insights that may elude classic regression techniques. This surge in adoption raises concerns about the adequacy of the resultant meta-models and, consequently, the interpretation of the findings. In response to these challenges, automated machine learning (AutoML) emerges as a promising solution, aiming to construct machine learning models with minimal intervention or guidance from human experts. AutoML encompasses crucial stages such as data preparation, feature engineering, hyperparameter optimization, and neural architecture search. This paper provides a comprehensive overview of the principles underpinning AutoML, surveying several widely used AutoML platforms. Additionally, the paper offers a glimpse into the application of AutoML to various engineering datasets. By comparing these results with those obtained through classical machine learning methods, the paper quantifies the uncertainties inherent in the application of a single ML model versus the holistic approach provided by AutoML. These examples showcase the efficacy of AutoML in extracting meaningful patterns and insights, emphasizing its potential to revolutionize the way we approach and analyze complex datasets.
Keywords: automated machine learning, uncertainty, engineering dataset, regression
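As one concrete example of an AutoML run, here is a minimal sketch using the FLAML library on a public regression dataset; FLAML is one possible platform, not necessarily among those surveyed, and the dataset stands in for the paper's engineering data.

```python
from flaml import AutoML
from sklearn.datasets import fetch_california_housing
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

automl = AutoML()
# Model selection and hyperparameter search happen inside fit(), bounded by time.
automl.fit(X_tr, y_tr, task="regression", time_budget=60)  # seconds

pred = automl.predict(X_te)
print(automl.best_estimator, r2_score(y_te, pred))
```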
Procedia PDF Downloads 60
28193 The Effect of Three-Dimensional Morphology on Vulnerability Assessment of Atherosclerotic Plaque
Authors: M. Zareh, H. Mohammadi, B. Naser
Abstract:
Atherosclerotic plaque rupture is the main trigger of heart attack and brain stroke, which are the leading causes of death in developed countries. A better understanding of rupture-prone plaque can help clinicians detect vulnerable plaques (rupture-prone or unstable plaques) and apply immediate medical treatment to prevent these life-threatening cardiovascular events; many studies have therefore addressed the properties of vulnerable plaques. Necrotic core and fibrous tissue are the two major tissues constituting atherosclerotic plaque. Using histopathological and numerical approaches, many studies have demonstrated that plaque rupture is strongly associated with a large necrotic core and a thin fibrous cap, two morphological characteristics that can be acquired by two-dimensional imaging of atherosclerotic plaques present in the coronary and carotid arteries. Plaque rupture is widely considered a mechanical failure inside the plaque tissue, occurring when the stress within the plaque exceeds the strength of the tissue material; hence, the finite element method, a powerful numerical approach, has been extensively applied to estimate the stress distribution within plaques of different compositions, which is then used to assess vulnerability characteristics including plaque morphology, material properties, and blood pressure. This study aims to evaluate the significance of three-dimensional morphology for the vulnerability degree of atherosclerotic plaque. To this end, different two-dimensional geometrical models of atherosclerotic plaques are considered based on available data and named Main 2D Models (M2M). Then, for each M2M, two idealized three-dimensional models are created, representing two possible three-dimensional morphologies that might exist for a plaque whose 2D morphology matches that M2M. The finite element method is employed to estimate the stress (von Mises stress) within each 3D model. Results indicate that for each M2M, the stress can vary significantly with possible 3D morphological changes in the plaque. Our results also show that an atherosclerotic plaque with a thick cap may rupture if it has a critical 3D morphology. This study highlights the effect of a plaque's 3D geometry on its degree of instability and suggests that the 3D morphology of a plaque might be necessary to assess atherosclerotic plaque vulnerability more effectively and accurately.
Keywords: atherosclerotic plaque, plaque rupture, finite element method, 3D model
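For reference, the von Mises equivalent stress used as the comparison metric is computed from the Cauchy stress tensor as in this minimal sketch; the example tensor values are illustrative only.

```python
import numpy as np

def von_mises(sigma):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor:
    sqrt(3/2 * s:s), where s is the deviatoric part of sigma."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(s * s))

sigma = np.array([[120e3, 30e3, 0],
                  [30e3,  60e3, 0],
                  [0,     0,   20e3]])              # Pa, illustrative values
print(von_mises(sigma))
```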
Procedia PDF Downloads 307
28192 Analyzing the Relationship between the Spatial Characteristics of Cultural Structure, Activities, and the Tourism Demand
Authors: Deniz Karagöz
Abstract:
This study attempts to comprehend the relationship between the spatial characteristics of cultural structure, cultural activities, and tourism demand in Turkey. The analysis is divided into four parts. The first part consists of a cultural structure and cultural activity (CSCA) index obtained by principal component analysis, which determined four distinct dimensions: cultural activity/structure, accessing culture, consumption, and cultural management. Exploratory spatial data analysis was employed to determine the spatial models of cultural structure and cultural activities in the 81 provinces of Turkey, and global Moran's I indices were used to ascertain the cultural-activity and structural clusters. Finally, the relationship between cultural activities/cultural structure and tourism demand was analyzed. The raw data of the study come from official databases: the data on cultural structure and activities were gathered from the Turkish Statistical Institute, and the data related to tourism demand were provided by the Republic of Turkey Ministry of Culture and Tourism.
Keywords: cultural activities, cultural structure, spatial characteristics, tourism demand, Turkey
Procedia PDF Downloads 558
28191 Constructing the Density of States from the Parallel Wang Landau Algorithm Overlapping Data
Authors: Arman S. Kussainov, Altynbek K. Beisekov
Abstract:
This work focuses on building an efficient, universal procedure to construct a single density of states from the multiple pieces of data provided by a parallel implementation of the Wang-Landau Monte Carlo algorithm. The Ising and Potts models were used as examples of two-dimensional spin lattices for constructing their densities of states. The sampled energy space was distributed among the individual walkers with certain overlaps, in order to incorporate the latest development of the algorithm, the replica-exchange density-of-states technique. Several factors of immediate importance for the seamless stitching process have been considered. These include, but are not limited to, the speed and universality of the initial parallel algorithm implementation as well as the data post-processing needed to produce the expected smooth density of states.
Keywords: density of states, Monte Carlo, parallel algorithm, Wang Landau algorithm
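The stitching step can be sketched as follows: since each walker determines log g(E) only up to an additive constant, consecutive pieces are shifted to agree on their overlap. The mean-offset matching rule and toy grids are assumptions about one reasonable implementation.

```python
import numpy as np

def stitch(log_g_pieces, energy_grids):
    """Join overlapping log-density-of-states pieces from parallel walkers by
    shifting each piece to match its predecessor in the overlap region."""
    e_all, lg_all = list(energy_grids[0]), list(log_g_pieces[0])
    for e, lg in zip(energy_grids[1:], log_g_pieces[1:]):
        overlap = np.intersect1d(e_all, e)
        prev = np.array([lg_all[e_all.index(x)] for x in overlap])
        curr = np.array([lg[list(e).index(x)] for x in overlap])
        shift = np.mean(prev - curr)       # best constant offset over the overlap
        for x, v in zip(e, lg):
            if x not in e_all:
                e_all.append(x)
                lg_all.append(v + shift)
    return np.array(e_all), np.array(lg_all)

# Toy pieces on integer energy grids with a 10-level overlap.
e1, g1 = np.arange(0, 60), np.linspace(0, 30, 60)
e2, g2 = np.arange(50, 110), np.linspace(25, 55, 60) + 7.0  # arbitrary offset
E, logG = stitch([g1, g2], [e1, e2])
```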
Procedia PDF Downloads 410
28190 Integrating Radar Sensors with an Autonomous Vehicle Simulator for an Enhanced Smart Parking Management System
Authors: Mohamed Gazzeh, Bradley Null, Fethi Tlili, Hichem Besbes
Abstract:
The burgeoning global ownership of personal vehicles has posed a significant strain on urban infrastructure, notably parking facilities, leading to traffic congestion and environmental concerns. Effective parking management systems (PMS) are indispensable for optimizing urban traffic flow and reducing emissions; the most commonly deployed systems nowadays rely on computer vision technology. This paper explores the integration of radar sensors and simulation in the context of smart parking management. We concentrate on radar sensors due to their versatility and utility in automotive applications, which extend to PMS; radar sensors also play a crucial role in driver-assistance systems and autonomous vehicle development. However, the resource-intensive nature of radar data collection for algorithm development and testing necessitates innovative solutions. Simulation, particularly the monoDrive simulator, an internal development tool used by NI, the Test and Measurement division of Emerson, offers a practical means to overcome this challenge. The primary objectives of this study are to simulate radar sensors to generate a substantial dataset for algorithm development and testing and, critically, to assess the transferability of models between simulated and real radar data. We focus on occupancy detection in parking as a practical use case, categorizing each parking space as vacant or occupied. The simulation approach using monoDrive enables algorithm validation and reliability assessment for virtual radar sensors. We meticulously designed various parking scenarios, involving manual measurement of parking-spot coordinates and orientations and the use of a TI AWR1843 radar. To create a diverse dataset, we generated 4950 scenarios comprising a total of 455,400 parking spots. This extensive dataset encompasses radar configuration details, ground-truth occupancy information, radar detections, and associated object attributes such as range, azimuth, elevation, radar cross-section, and velocity data. The paper also addresses the intricacies and challenges of real-world radar data collection, highlighting the advantages of simulation in producing radar data for parking-lot applications. We developed classification models based on Support Vector Machines (SVM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), exclusively trained and evaluated on simulated data, and subsequently applied these models to real-world data, comparing their performance against the monoDrive dataset. The study demonstrates the feasibility of transferring models from a simulated environment to real-world applications, achieving an impressive accuracy score of 92% using only one radar sensor. This finding underscores the potential of radar sensors and simulation in the development of smart parking management systems, offering significant benefits for improving urban mobility and reducing environmental impact. The integration of radar sensors and simulation represents a promising avenue for enhancing smart parking management systems, addressing the challenges posed by the exponential growth in personal vehicle ownership. This research contributes valuable insights into the practicality of using simulated radar data in real-world applications and underscores the role of radar technology in advancing urban sustainability.
Keywords: autonomous vehicle simulator, FMCW radar sensors, occupancy detection, smart parking management, transferability of models
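A minimal sketch of the train-on-simulated, test-on-real workflow with an SVM classifier; the five per-spot features and the synthetic arrays are placeholders for the monoDrive and TI AWR1843 data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Per-spot features aggregated from radar detections (range, azimuth,
# elevation, RCS, velocity statistics); synthetic stand-ins here.
rng = np.random.default_rng(0)
X_sim = rng.normal(size=(5000, 5))
y_sim = (X_sim[:, 3] + 0.5 * X_sim[:, 0] > 0).astype(int)   # occupied / vacant

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_sim, y_sim)                    # train purely on simulated radar data

X_real = rng.normal(size=(200, 5))       # placeholder for real measurements
print(clf.predict(X_real)[:10])          # transfer test: simulated -> real
```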
Procedia PDF Downloads 80
28189 Rethinking Urban Green Space Quality and Planning Models from Users and Experts' Perspective for Sustainable Development: The Case of Debre Berhan and Debre Markos Cities, Ethiopia
Authors: Alemaw Kefale, Aramde Fetene, Hayal Desta
Abstract:
This study analyzed users' and experts' views on green-space quality and planning models in the Debre Berhan (DB) and Debre Markos (DM) cities in Ethiopia. A questionnaire survey was conducted with 350 park users (148 from DB and 202 from DM) to rate the accessibility, size, shape, vegetation cover, social and cultural context, conservation and heritage, community participation, attractiveness, comfort, safety, inclusiveness, and maintenance of green spaces using a Likert scale. Key informant interviews were held with 13 experts in DB and 12 in DM. Descriptive statistics and chi-square tests of independence were performed. A statistically significant association existed between the perception of green-space quality attributes and users' occupation (χ² (160, N = 350) = 224.463, p < 0.001), age (χ² (128, N = 350) = 212.812, p < 0.001), gender (χ² (32, N = 350) = 68.443, p < 0.001), and education level (χ² (192, N = 350) = 293.396, p < 0.001). 61.7% of park users were dissatisfied with the quality of urban green spaces. Users perceived dense vegetation cover as "good", with a mean value of 3.41, while the remaining attributes were perceived as "medium", with mean values of 2.62-3.32. Only quantitative space standards are practiced as a green-space planning model, while other models are unfamiliar and never used in either city. Therefore, experts need to be aware of, and practice, urban green-space models during urban planning to ensure that new developments include green spaces that accommodate the community's and the environment's needs.
Keywords: urban green space, quality, users and experts, green space planning models, Ethiopia
Procedia PDF Downloads 57
28188 Building Energy Modeling for Networks of Data Centers
Authors: Eric Kumar, Erica Cochran, Zhiang Zhang, Wei Liang, Ronak Mody
Abstract:
The objective of this article was to create a modelling framework that exposes the marginal costs of shifting workloads across geographically distributed data-centers. Geographical distribution of internet services helps to optimize their performance for localized end users with lowered communications times and increased availability. However, due to the geographical and temporal effects, the physical embodiments of a service's data center infrastructure can vary greatly. In this work, we first identify that the sources of variances in the physical infrastructure primarily stem from local weather conditions, specific user traffic profiles, energy sources, and the types of IT hardware available at the time of deployment. Second, we create a traffic simulator that indicates the IT load at each data-center in the set as an approximator for user traffic profiles. Third, we implement a framework that quantifies the global level energy demands using building energy models and the traffic profiles. The results of the model provide a time series of energy demands that can be used for further life cycle analysis of internet services.
Keywords: data-centers, energy, life cycle, network simulation
Procedia PDF Downloads 147
28187 Redundancy Component Matrix and Structural Robustness
Authors: Xinjian Kou, Linlin Li, Yongju Zhou, Jimian Song
Abstract:
We introduce the redundancy matrix, which clearly expresses the geometrical/topological configuration of a structure. With the matrix, the redundancy of the structure is resolved into redundant components assigned to each member or rigid joint. The values of the diagonal elements in the matrix indicate the importance of the corresponding members or rigid joints, and the geometric correlations are shown by the off-diagonal elements. If a member or rigid joint fails, the reassignment of the redundant components can be calculated with the recursive method given in the paper. By combining the indexes of reliability and the redundancy components, we define an index of structural robustness. To further explain the properties of the redundancy matrix, we cite several examples of statically indeterminate structures, including two trusses and a rigid frame. With the examples, some simple results and the properties of the matrix are discussed. The examples also illustrate that the redundancy matrix and the related concepts are valuable in structural safety analysis.
Keywords: Structural Robustness, Structural Reliability, Redundancy Component, Redundancy Matrix
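One standard formulation of such a matrix (assumed here; the paper's exact derivation may differ) projects imposed member elongations onto their incompatible part, so that the diagonal gives the redundancy components and the trace equals the degree of static indeterminacy.

```python
import numpy as np

def redundancy_matrix(B, k):
    """Redundancy matrix of a truss: B is the (members x dofs) compatibility
    matrix, k the vector of member axial stiffnesses. This is one common
    formulation, not necessarily the paper's."""
    Km = np.diag(k)
    K = B.T @ Km @ B                       # global stiffness matrix
    return np.eye(B.shape[0]) - B @ np.linalg.solve(K, B.T @ Km)

# Toy example: three parallel bars sharing one dof (two redundant members).
B = np.array([[1.0], [1.0], [1.0]])
R = redundancy_matrix(B, k=np.array([1.0, 1.0, 2.0]))
print(np.round(np.diag(R), 3), round(np.trace(R), 3))  # trace = 3 - 1 = 2
```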
Procedia PDF Downloads 270
28186 Long-Term Modal Changes in International Traffic - Modelling Exercise
Authors: Tomasz Komornicki
Abstract:
The primary aim of the presentation is to model border traffic and, at the same time, to explain which economic variables the intensity of border traffic depended on in the long term. For this purpose, long series of traffic data on the Polish borders were used. Models were estimated for three variants of the modelled traffic: (a) total arrivals and departures (movements of Poles and foreigners combined), (b) arrivals and departures of Poles, and (c) arrivals and departures of foreigners. Each of these variables entered the models as the logarithm of the number of persons. Data from 1994-2017 were used for the modelling (for internal Schengen borders, the years 1994-2007). Information on the number of people arriving in and leaving Poland was collected for a total of 303 border crossings. On the basis of the analyses carried out, it was found that the main determinants of border traffic are differences in the level of economic development (GDP), the condition of the economy (level of unemployment), and the degree of border permeability. Differences in the prices of goods (fuels, tobacco, and alcohol products) and services (mainly basic ones, e.g., hairdressing) are also statistically significant for border traffic. Such a relationship exists mainly on the eastern border (border traffic determined largely by differences in the prices of goods) and on the border with Germany (in the first analysed period, border traffic was determined mainly by the prices of goods; later, after Poland's accession to the EU and the Schengen area, also by the prices of services). The models also confirmed differences in the set of factors shaping the volume and structure of border traffic on the Polish borders resulting from general geopolitical conditions, with the year 2007 being an important caesura, after which the classical population-mobility factors became visible. The results obtained were additionally related to the changes in traffic that occurred as a result of the COVID-19 pandemic and of the Russian aggression against Ukraine.
Keywords: border, modal structure, transport, Ukraine
Procedia PDF Downloads 114
28185 Elastic and Plastic Collision Comparison Using Finite Element Method
Authors: Gustavo Rodrigues, Hans Weber, Larissa Driemeier
Abstract:
The prediction of post-impact conditions and of the behavior of the bodies during impact has been the object of several collision models. The formulation from Hertz's theory, dating from the 19th century, is generally used. These models consider the repulsive force to be proportional to the deformation of the bodies in contact and may also take it to be proportional to the rate of deformation. The objective of the present work is to analyze the behavior of the bodies during impact using the Finite Element Method (FEM) with elastic and plastic material models. The main parameters evaluated are the contact force, the time of contact, and the deformation of the bodies. An advantage of the FEM approach is the possibility of applying plastic deformation to the model according to the material definition: the Johnson-Cook plasticity model is used, whose parameters are obtained through empirical tests on real materials. This model allows analysis of the permanent deformation caused by impact, a phenomenon observed in the real world depending on the forces applied to the body. The results are compared with one another and with the model based on Hertz's theory.
Keywords: collision, impact models, finite element method, Hertz theory
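For reference, the Hertzian normal contact force between two identical elastic spheres follows F = (4/3) E* sqrt(R*) d^(3/2), where d is the indentation; the sketch below evaluates it with illustrative steel-like properties, not the paper's FEM material data.

```python
import numpy as np

def hertz_contact_force(delta, R=0.01, E=210e9, nu=0.3):
    """Hertzian normal contact force (N) between two identical elastic spheres.
    Material and radius values are illustrative (steel-like)."""
    e_eff = E / (2 * (1 - nu**2))          # effective modulus, identical spheres
    r_eff = R / 2                          # effective radius, identical spheres
    return (4.0 / 3.0) * e_eff * np.sqrt(r_eff) * delta**1.5

for delta in (1e-6, 5e-6, 1e-5):           # indentation depth in metres
    print(delta, hertz_contact_force(delta))
```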
Procedia PDF Downloads 173
28184 Internal Methane Dry Reforming Kinetic Models in Solid Oxide Fuel Cells
Authors: Saeed Moarrefi, Shou-Han Zhou, Liyuan Fan
Abstract:
Coupled with solid oxide fuel cells, methane dry reforming is a promising pathway for energy production while mitigating carbon emissions. However, the influence of carbon dioxide and of the electrochemical reactions on the internal dry reforming reaction within the fuel cells remains debatable, requiring accurate kinetic models to describe the internal reforming behavior. We employed the Power-Law and Langmuir-Hinshelwood-Hougen-Watson models in an electrolyte-supported solid oxide fuel cell with a NiO-GDC-YSZ anode. The current density used in this study ranges from 0 to 1000 A/m² at 973 K to 1173 K to estimate various kinetic parameters. We studied the influence of the electrochemical reactions on the adsorption terms, the equilibrium of the reactions, the activation energy, the pre-exponential factor of the rate constant, and the adsorption equilibrium constant. This study provides essential parameters for future simulations and highlights the need for a more detailed examination of reforming kinetic models.
Keywords: dry reforming kinetics, Langmuir Hinshelwood-Hougen Watson, power-law, SOFC
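As a reference for the Power-Law form, the sketch below evaluates r = k0 exp(-Ea/RT) p_CH4^a p_CO2^b over the study's temperature range; all kinetic parameters here are hypothetical placeholders, not the values fitted in the study.

```python
import numpy as np

def power_law_rate(p_ch4, p_co2, T, k0=1.0e5, Ea=90e3, a=0.8, b=0.4):
    """Power-law dry-reforming rate; k0, Ea, and the reaction orders a, b
    are illustrative assumptions."""
    R = 8.314  # J/(mol*K)
    return k0 * np.exp(-Ea / (R * T)) * p_ch4**a * p_co2**b

for T in (973, 1073, 1173):                # temperatures used in the study (K)
    print(T, power_law_rate(p_ch4=0.5, p_co2=0.5, T=T))
```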
Procedia PDF Downloads 19