Search results for: improvement of model accuracy and reliability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23618

21938 Determination of Measurement Uncertainty of the Diagnostic Meteorological Model CALMET

Authors: Nina Miklavčič, Urška Kugovnik, Natalia Galkina, Primož Ribarič, Rudi Vončina

Abstract:

Today, the need for weather predictions is deeply rooted in the everyday life of people as well as in industry. Forecasts influence final decision-making in multiple areas, from agriculture and the prevention of natural disasters to air-traffic regulation and national-level responses to health, security, and economic problems. In Slovenia, alongside other existing applications, weather forecasts are used for the prognosis of electrical current transmission through power lines. Meteorological parameters are among the key factors that must be considered when estimating the reliability of the electrical energy supply to consumers. As with any other measured value, knowledge of the measurement uncertainty is also critical for a secure and reliable supply of energy. Estimating measurement uncertainty grants us a more accurate interpretation of the data, better quality of the end results, and even the possibility of improving weather forecast models. In this article, we focus on estimating the measurement uncertainty of the diagnostic microscale meteorological model CALMET. For the purposes of our research, we used a network of meteorological stations spread across the area of interest, which enables a side-by-side comparison of measured meteorological values with the values calculated by CALMET and, as the final result, an estimate of the measurement uncertainty.
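As a rough illustration of such a side-by-side comparison (not the authors' actual procedure, and with made-up station values), the mean bias and a simple RMSE-style standard-uncertainty estimate can be computed from paired measured/modeled values:

```python
import math

def model_uncertainty(measured, modeled):
    """Compare measured station values against model output:
    return the mean bias and the root-mean-square error as a
    simple standard-uncertainty estimate."""
    diffs = [m - c for m, c in zip(measured, modeled)]
    n = len(diffs)
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return bias, rmse

# Hypothetical wind-speed values (m/s): stations vs. CALMET output
measured = [2.1, 3.4, 1.8, 4.0, 2.9]
modeled = [2.4, 3.1, 2.0, 3.6, 3.2]
bias, rmse = model_uncertainty(measured, modeled)
```

A real uncertainty budget would also account for the stations' own instrument uncertainty, not only the model-measurement residuals.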

Keywords: uncertainty, meteorological model, meteorological measurement, CALMET

Procedia PDF Downloads 81
21937 Design and Development of a Prototype Vehicle for Shell Eco-Marathon

Authors: S. S. Dol

Abstract:

Improvement in vehicle efficiency can reduce global fossil fuel consumption. For that reason, Shell Global Corporation organizes the Shell Eco-marathon, in which student teams are required to design, build, and test energy-efficient vehicles. This paper focuses on the design process and development of a fuel-economic vehicle that satisfies the requirements of the competition. In this project, three components are designed and analyzed: the body, chassis, and powertrain of the vehicle. An optimum design for each component is produced through simulation analysis and theoretical calculation, with improvements made as the project progresses.

Keywords: energy efficient, drag force, chassis, powertrain

Procedia PDF Downloads 335
21936 Psychological Testing in Industrial/Organizational Psychology: Validity and Reliability of Psychological Assessments in the Workplace

Authors: Melissa C. Monney

Abstract:

Psychological testing has been of interest to researchers for many years, as tests are useful tools for assessing and diagnosing various disorders and for understanding human behavior. However, for over 20 years now, researchers and laypersons alike have been interested in using them for other purposes, such as informing decisions on employee selection, promotion, and even termination. In recent years, psychological assessments have been useful in facilitating workplace decision-making regarding employee movement within organizations. This literature review explores four of the most commonly used psychological tests in workplace environments, namely cognitive ability, emotional intelligence, integrity, and personality tests, which organizations have used to assess different facets of human behavior as predictors of future employee behavior. The findings suggest that although there is much controversy and debate regarding the validity and reliability of these tests in workplace settings, as they were not originally designed for these purposes, their use in the workplace has helped decrease costs and employee turnover and increase job satisfaction by ensuring that the right employees are selected for their roles.

Keywords: cognitive ability, personality testing, predictive validity, workplace behavior

Procedia PDF Downloads 242
21935 Settlement Performance of Soft Clay Reinforced with Granular Columns

Authors: Muneerah Jeludin, V. Sivakumar

Abstract:

Numerous laboratory-based research studies on the bearing capacity of ground improved with granular columns have been well documented. However, information on settlement performance is still scarce. A laboratory model study of the settlement behavior of soft clay reinforced with granular columns was conducted, and the results are presented. The investigation used soft kaolin clay samples 300 mm in diameter and 400 mm in length. The clay samples were reinforced with single and multiple granular columns of various lengths using the displacement and replacement installation methods. The results indicated that no settlement reduction was achieved for a short single floating column. The settlement reduction factors reported for L/d ratios of 5.0, 7.5, and 10.0 are in the range of 1 to 2. The findings of this research show that the reduction factors are modest and that the load-sharing mechanism between columns and the surrounding clay is complex, particularly for column groups, being affected by other factors such as negative skin friction.

Keywords: ground improvement, model test, reinforced soil, settlement

Procedia PDF Downloads 466
21934 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning

Authors: Joseph George, Anne Kotteswara Roa

Abstract:

Skin disease is one of the most common kinds of health issue people face nowadays. Skin cancer (SC) is one such disease, and its detection relies on skin biopsy results and the expertise of doctors, which consumes time and can produce inaccurate results. Detecting skin cancer at an early stage is a challenging task, yet the disease easily spreads through the whole body and increases the mortality rate; it is curable when detected early. Correct and accurate classification hinges on skin cancer identification based on disease features such as shape, size, color, and symmetry. Many skin diseases share similar characteristics, which makes selecting important features from skin cancer image datasets a challenging issue. An automated skin cancer detection and classification framework is therefore required to improve diagnostic accuracy and to compensate for the scarcity of human experts. Recently, deep learning techniques such as convolutional neural networks (CNN), deep belief networks (DBN), artificial neural networks (ANN), recurrent neural networks (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. These DL techniques increase classification accuracy while mitigating computational complexity and time consumption.
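As a quick reference for the evaluation metrics named above (an illustrative sketch with toy labels, not data from any surveyed paper), all of them fall out of a binary confusion matrix:

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall (sensitivity), specificity, F1 and accuracy
    for a binary lesion classifier (1 = malignant, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, specificity, f1, accuracy

# Toy ground truth and predictions
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
precision, recall, specificity, f1, accuracy = binary_metrics(y_true, y_pred)
```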

Keywords: skin cancer, deep learning, performance measures, accuracy, datasets

Procedia PDF Downloads 129
21933 A Nonlinear Visco-Hyper Elastic Constitutive Model for Modelling Behavior of Polyurea at Large Deformations

Authors: Shank Kulkarni, Alireza Tabarraei

Abstract:

The fantastic properties of polyurea such as flexibility, durability, and chemical resistance have brought it a wide range of application in various industries. Effective prediction of the response of polyurea under different loading and environmental conditions necessitates the development of an accurate constitutive model. Similar to most polymers, the behavior of polyurea depends on both strain and strain rate. Therefore, the constitutive model should be able to capture both these effects on the response of polyurea. To achieve this objective, in this paper, a nonlinear hyper-viscoelastic constitutive model is developed by the superposition of a hyperelastic and a viscoelastic model. The proposed constitutive model can capture the behavior of polyurea under compressive loading conditions at various strain rates. Four parameter Ogden model and Mooney Rivlin model are used to modeling the hyperelastic behavior of polyurea. The viscoelastic behavior is modeled using both a three-parameter standard linear solid (SLS) model and a K-BKZ model. Comparison of the modeling results with experiments shows that Odgen and SLS model can more accurately predict the behavior of polyurea. The material parameters of the model are found by curve fitting of the proposed model to the uniaxial compression test data. The proposed model can closely reproduce the stress-strain behavior of polyurea for strain rates up to 6500 /s.

Keywords: constitutive modelling, ogden model, polyurea, SLS model, uniaxial compression test

Procedia PDF Downloads 244
21932 A Lightweight Pretrained Encrypted Traffic Classification Method with Squeeze-and-Excitation Block and Sharpness-Aware Optimization

Authors: Zhiyan Meng, Dan Liu, Jintao Meng

Abstract:

Dependable encrypted traffic classification is crucial for improving cybersecurity and handling the growing amount of data. Large language models have shown that learning from large datasets can be effective, making pre-trained methods for encrypted traffic classification popular. However, attention-based pre-trained methods face two main issues: their large number of neural parameters is unsuitable for low-computation environments such as mobile devices and real-time applications, and they often overfit by getting stuck in local minima. To address these issues, we developed a lightweight transformer model, which reduces the computational parameters through lightweight vocabulary construction and a Squeeze-and-Excitation block. We use sharpness-aware optimization to avoid local minima during pre-training and capture temporal features with relative positional embeddings. Our approach keeps the model's classification accuracy high for downstream tasks. We conducted experiments on four datasets: USTC-TFC2016, VPN 2016, Tor 2016, and CICIOT 2022. Even with fewer than 18 million parameters, our method achieves classification results similar to methods with ten times as many parameters.
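To make the Squeeze-and-Excitation idea concrete, here is a minimal NumPy sketch of the block (random illustrative weights, not the paper's trained model): a channel-wise "squeeze" by global average pooling, a two-layer "excitation" bottleneck, and sigmoid gating that rescales each channel.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(x, w1, w2):
    """Squeeze-and-Excitation on a feature map x of shape (C, L):
    squeeze = per-channel global average pool; excitation = two
    dense layers (ReLU then sigmoid); output = channel-rescaled x."""
    z = x.mean(axis=1)                          # squeeze: (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))   # excitation gates in (0, 1)
    return x * s[:, None]                       # recalibrated feature map

rng = np.random.default_rng(0)
C, L, r = 4, 8, 2                # channels, length, reduction ratio
x = rng.normal(size=(C, L))
w1 = rng.normal(size=(C // r, C))  # bottleneck (reduction) weights
w2 = rng.normal(size=(C, C // r))  # expansion weights
y = squeeze_excite(x, w1, w2)
```

Because the gates lie strictly between 0 and 1, the block can only attenuate channels, never amplify them; it adds very few parameters (2·C²/r), which is why it suits lightweight models.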

Keywords: sharpness-aware optimization, encrypted traffic classification, squeeze-and-excitation block, pretrained model

Procedia PDF Downloads 30
21931 A Research on the Improvement of Small and Medium-Sized City in Early-Modern China (1895-1927): Taking Southern Jiangsu as an Example

Authors: Xiaoqiang Fu, Baihao Li

Abstract:

In 1895, China's defeat in the Sino-Japanese War prompted a trend of comprehensive and systematic study of Western models in China. In urban planning and construction, an urban reform movement slowly sprang up, aimed at renovating and reconstructing traditional cities into modern cities resembling the concessions. During the movement, traditional Chinese cities began a process of modern urban planning toward their modernization. Meanwhile, the traditional planning morphology and system started to disintegrate, while Western forms and technology became the paradigm. The improvement of existing cities therefore became the prototype of urban planning in early modern China. Current research on the movement concentrates mainly on large cities, concessions, railway hub cities, and similar special cities; systematic research on the large number of traditional small and medium-sized cities is still lacking. This paper takes the improvement works of small and medium-sized cities in the southern region of Jiangsu Province as its research object. First, the criteria for small and medium-sized cities are based on the administrative levels of general office and cities at the county level. Second, Southern Jiangsu is a suitable research object: the southern area of Jiangsu Province, Southern Jiangsu for short, was the most economically developed region in Jiangsu and one of the most economically developed and most urbanized regions in China. As the most developed agricultural area in ancient China, Southern Jiangsu formed a large number of traditional small and medium-sized cities. In early modern times, with the help of Shanghai's economic radiation, its geographical advantages, and a powerful economic foundation, Southern Jiangsu became an important birthplace of China's national industry. Furthermore, the strong commercial atmosphere promoted widespread urban improvement practices unmatched in other regions. Meanwhile, Shanghai, Zhenjiang, Suzhou, and other port cities served as the improvement pattern for small and medium-sized cities in Southern Jiangsu. This paper analyzes the reform movement of the small and medium-sized cities in Southern Jiangsu (1895-1927), including its subjects, objects, laws, technologies, and the influencing political and social factors. Finally, the paper reveals the formation mechanism and characteristics of the urban improvement movement in early modern China. According to the paper, the improvement of small and medium-sized cities was a kind of gestation of local city-planning culture in early modern China, fusing imported and indigenous elements.

Keywords: early modern China, improvement of small-medium city, southern region of Jiangsu province, urban planning history of China

Procedia PDF Downloads 260
21930 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal

Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan

Abstract:

This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear with prolonged use, causing fluctuating vibrations. Improving the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to these vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a data acquisition system (DAQ). Statistical features were extracted from the acquired vibration signals under various operating conditions and given as input to the algorithms for fault classification. Supervised machine learning algorithms such as support vector machines (SVM) and deep learning algorithms such as the deep feed forward neural network (DFFNN) and deep belief network (DBN) were used for fault classification. A fusion of the DBN and DFFNN classifiers was architected to further enhance the classification accuracy and reduce the computational complexity. The fault classification accuracy for each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the best classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
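For readers unfamiliar with the feature-extraction step, a minimal sketch follows (generic time-domain statistics on a synthetic signal; the paper's exact feature set is not specified here). Impacts from a damaged gear tooth typically raise the crest factor and kurtosis of the vibration signal:

```python
import numpy as np

def vibration_features(signal):
    """Common time-domain statistical features extracted from a
    vibration signal before feeding a fault classifier."""
    x = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    return {
        "mean": x.mean(),
        "std": x.std(),
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms,
        "kurtosis": np.mean((x - x.mean()) ** 4) / x.var() ** 2,
    }

# Synthetic signal: 50 Hz meshing tone plus one simulated impact spike
t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2 * np.pi * 50 * t)
x[500] += 5.0  # hypothetical gear-tooth impact
feats = vibration_features(x)
```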

Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal

Procedia PDF Downloads 114
21929 Fapitow: An Advanced AI Agent for Travel Agent Competition

Authors: Faiz Ul Haque Zeya

Abstract:

In this paper, Fapitow's bidding strategy and approach to the Travel Agent Competition (TAC) are described. Fapitow was initially designed using the agents provided by the TAC team, and our strategy was developed mainly by modifying them. Later, after observing the agent's behavior, we decided to devise strategies that became the main source of the agent's improved utilities; theoretical examination indicated that these strategies would yield a significant performance improvement, which was later confirmed by the agent's performance in the games. Techniques and strategies for further possible improvement are also described. TAC provides a real-time, uncertain environment for learning, experimenting with, and implementing various AI techniques. Some lessons learned about handling uncertain environments are also presented.

Keywords: agent, travel agent competition, bidding, TAC

Procedia PDF Downloads 108
21928 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework

Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim

Abstract:

Background modeling and subtraction in video analysis has been widely proven to be an effective method for moving-object detection in many computer vision applications. Over the past years, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are two of the most frequently occurring issues in practice. This paper presents a new two-layer model based on the codebook algorithm incorporating the local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is a block-based codebook combining an LBP histogram with the mean values of the RGB color channels. Because LBP features are invariant to monotonic gray-scale changes, this layer produces block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is then employed to refine the outputs of the first layer, eliminating further false positives. As a result, the proposed approach greatly improves accuracy under dynamic backgrounds and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.
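The claimed illumination tolerance comes from the LBP code itself. A minimal sketch of the basic 8-neighbour LBP (not the authors' block/histogram pipeline) shows why: each pixel's code depends only on the sign of neighbour-centre differences, so any monotonic brightness change leaves the code untouched.

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 8-neighbour local binary pattern: each interior pixel is
    encoded by thresholding its 8 neighbours against the centre value,
    giving a texture code invariant to monotonic grey-scale changes."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

img = np.array([[10, 10, 10],
                [10, 20, 10],
                [10, 10, 10]], dtype=np.int32)
code = lbp_8neighbour(img)            # all neighbours below centre
brighter = lbp_8neighbour(img + 50)   # monotonic shift: same code
```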

Keywords: background subtraction, codebook model, local binary pattern, dynamic background, illumination change

Procedia PDF Downloads 217
21927 OmniDrive Model of a Holonomic Mobile Robot

Authors: Hussein Altartouri

Abstract:

In this paper, the kinematic and kinetic models of an omnidirectional holonomic mobile robot are presented; together they form the OmniDrive model. A mathematical model is derived for a robot equipped with three omnidirectional wheels. This model, which takes into consideration both the kinematics and the kinetics of the robot, is developed into a state-space representation. Relative analysis of velocities and displacements is used for the kinematics of the robot, and Lagrange's approach is adopted for deriving the equation of motion. Only the drive train and the mechanical assembly of the Festo Robotino® are considered in this model. The model is developed mainly for motion control; furthermore, it can be used for simulation in virtual environments other than Robotino® View. A further use of the model is in mechatronics research, with the aim of teaching and learning advanced control theories.
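The inverse-kinematic part of such a model can be sketched in a few lines. The geometry below (three wheels at 120°, illustrative wheel radius and mounting distance, not Festo's actual specifications) maps body velocities to wheel angular velocities:

```python
import math

def wheel_speeds(vx, vy, omega, R=0.04, d=0.125):
    """Inverse kinematics of a three-wheel omnidirectional robot
    (wheels mounted 120 deg apart): map body velocities
    (vx, vy [m/s], omega [rad/s]) to wheel angular velocities [rad/s].
    R = wheel radius, d = centre-to-wheel distance (illustrative values)."""
    angles = [math.radians(a) for a in (60, 180, 300)]  # wheel placements
    return [(-math.sin(a) * vx + math.cos(a) * vy + d * omega) / R
            for a in angles]

# Sanity check: pure rotation must drive all three wheels equally
w = wheel_speeds(0.0, 0.0, 1.0)
```

Because the 3x3 mapping is invertible for this geometry, the platform is holonomic: any planar twist (vx, vy, omega) is reachable.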

Keywords: mobile robot, omni-direction wheel, mathematical model, holonomic mobile robot

Procedia PDF Downloads 609
21926 The Role of Islamic Microfinance Banks in Promoting the Social Welfare: A Case study of Yobe Microfinance Bank

Authors: Sheriff Muhammad Ibrahim, Tijjani Muhammad

Abstract:

The study assesses the role of an Islamic microfinance bank in promoting customers' social welfare, using the newly developed products of Yobe Microfinance Bank to encourage financial inclusion and alleviate poverty in Yobe communities. Yobe State is ranked the poorest in the region and scores low on human development and poverty alleviation, with clearly indicated low education rates, poor implementation of government policies on poverty, and a high rate of financial exclusion. The study adopted a survey approach, using random sampling to collect data from customers of Yobe Microfinance Bank on the acceptability of its newly introduced sharia-compliant products among the people of Yobe State. A total of 300 respondents completed the survey using a Likert scale. The study employed structural equation modeling to analyze the data and tested reliability and validity to ensure the accuracy of respondents' information. The findings indicate a positive relationship between Islamic banking products and customer satisfaction. The study concludes that introducing and consistently managing Islamic products can improve social welfare and reduce poverty through financial inclusion in the state.

Keywords: islamic microfinance, social welfare, products, poverty

Procedia PDF Downloads 128
21925 The Attitude and Willingness to Use Telecare for Arthritis Patients

Authors: Jui-Chen Huang

Abstract:

Nowadays, as the population ages, the number of people who need care is increasing, but manpower and funding are insufficient. This study therefore explores the attitudes and willingness of arthritis patients to adopt telecare, taking a large medical institution in central Taiwan as the sample hospital. A structured questionnaire (using a five-point Likert scale) was used to collect data from chronic patients over 20 years old; a total of 500 valid questionnaires were collected. SPSS 18.0 statistical software was used for reliability analysis and independent-samples t-tests to explore differences between arthritis patients and non-arthritis patients in attitudes and willingness to use telecare. The Cronbach's alpha of the study questionnaire was above 0.94, indicating good reliability. Arthritis patients and non-arthritis patients showed statistically significant differences in attitudes toward telecare, while differences in willingness to use it were not statistically significant. In addition, arthritis patients' average attitude and intention scores for telecare were 3.38 and 3.41, respectively, indicating that arthritis patients have a certain degree of positive attitude and willingness to adopt telecare, which is worth follow-up research and promotion by industry.

Keywords: telecare, arthritis patients, attitudes, intention

Procedia PDF Downloads 142
21924 A Simulation-Optimization Approach to Control Production, Subcontracting and Maintenance Decisions for a Deteriorating Production System

Authors: Héctor Rivera-Gómez, Eva Selene Hernández-Gress, Oscar Montaño-Arango, Jose Ramon Corona-Armenta

Abstract:

This research studies the joint production, maintenance, and subcontracting control policy for an unreliable, deteriorating manufacturing system. Production activities are controlled by a derivation of the hedging point policy, and because the system is subject to deterioration, its capacity to satisfy product demand progressively declines. Multiple deterioration effects are considered, reflected mainly in the quality of the parts produced and the reliability of the machine. Subcontracting is available as support to satisfy product demand, and overhaul maintenance can be conducted to reduce the effects of deterioration. The main objective of the research is to determine simultaneously the production, maintenance, and subcontracting rates that minimize the total incurred cost. A stochastic dynamic programming model is developed and solved through a simulation-based approach combining statistical analysis and optimization with the response surface methodology. The results highlight the strong interactions between production, deterioration, and quality, which justify the development of an integrated model. A numerical example and a sensitivity analysis are presented to validate our results.

Keywords: subcontracting, optimal control, deterioration, simulation, production planning

Procedia PDF Downloads 580
21923 A Constitutive Model for Time-Dependent Behavior of Clay

Authors: T. N. Mac, B. Shahbodaghkhan, N. Khalili

Abstract:

A new elastic-viscoplastic (EVP) constitutive model is proposed for the analysis of the time-dependent behavior of clay. The proposed model is based on bounding surface plasticity and the viscoplastic consistency framework, establishing a continuous transition from plasticity to rate-dependent viscoplasticity. Unlike overstress-based models, this model satisfies the consistency condition in formulating the constitutive equation of the EVP model. The procedure for deriving the constitutive relationship is also presented. Simulation results and comparisons with experimental data are then presented to demonstrate the performance of the model.

Keywords: bounding surface, consistency theory, constitutive model, viscosity

Procedia PDF Downloads 492
21922 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide bias-correction steps based on biological considerations, such as GC content, applied to single samples separately. The main problem is that not all biases are known; for example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM treats Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by the simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm that alternates between the estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
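The alternating idea behind such bilinear estimation can be illustrated with a toy alternating least-squares loop (a deliberately simplified sketch, not the XAEM algorithm itself, which works with an EM formulation on count data): fit β given X, then refit X given β, repeating until the product reproduces the multi-sample data matrix.

```python
import numpy as np

def als_bilinear(Y, k=1, iters=200, seed=0):
    """Toy alternating least squares for the bilinear model Y ~ X @ B
    with BOTH factors unknown, estimated jointly from multi-sample
    data (the spirit of alternating between X- and beta-updates)."""
    n, m = Y.shape
    rng = np.random.default_rng(seed)
    X = rng.random((n, k))                              # random start
    for _ in range(iters):
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)       # update beta given X
        Xt, *_ = np.linalg.lstsq(B.T, Y.T, rcond=None)  # update X given beta
        X = Xt.T
    return X, B

# Rank-1 synthetic "expression" matrix: recovered exactly (up to scaling)
x_true = np.array([[1.0], [2.0], [3.0]])
b_true = np.array([[4.0, 5.0]])
Y = x_true @ b_true
X, B = als_bilinear(Y)
```

Note the identifiability caveat visible even in this sketch: X and B are recovered only up to a shared scaling, which is why methods of this kind need normalization or constraints on X.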

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 142
21921 Investigation of the Turbulent Cavitating Flows from the Viewpoint of the Lift Coefficient

Authors: Ping-Ben Liu, Chien-Chou Tseng

Abstract:

The objective of this study is to investigate the relationship between the lift coefficient and the dynamic behavior of cavitating flow around a two-dimensional Clark Y hydrofoil at an 8° angle of attack, a cavitation number of 0.8, and a Reynolds number of 7×10⁵. The flow field is investigated numerically using a vapor transport equation and a modified turbulence model that applies a filter and local density correction. The results, including the time-averaged lift/drag coefficients and shedding frequency, agree well with experimental observations, confirming the reliability of the simulation. According to the variation of the lift coefficient, the cycle consisting of cavity growth and shedding can be divided into three stages, and the lift coefficient behaves similarly in each stage owing to the formation and shedding of the cavity around the trailing edge.
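The two reported quantities, time-averaged lift coefficient and shedding frequency, are typically extracted from the lift time series; a minimal post-processing sketch (with a synthetic signal, not the study's data) takes the mean and the dominant FFT peak:

```python
import numpy as np

def shedding_frequency(cl, dt):
    """Return the mean lift coefficient and the dominant shedding
    frequency of a lift-coefficient time series, via the FFT peak
    of the mean-removed signal."""
    cl = np.asarray(cl, dtype=float)
    mean_cl = cl.mean()
    spectrum = np.abs(np.fft.rfft(cl - mean_cl))
    freqs = np.fft.rfftfreq(len(cl), d=dt)
    return mean_cl, freqs[np.argmax(spectrum)]

# Synthetic signal: mean lift 0.8 with a 25 Hz shedding oscillation
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
cl = 0.8 + 0.3 * np.sin(2 * np.pi * 25 * t)
mean_cl, f_shed = shedding_frequency(cl, dt)
```

The shedding frequency is usually reported non-dimensionally as a Strouhal number, f·c/U, with chord c and inflow speed U.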

Keywords: Computational Fluid Dynamics, cavitation, turbulence, lift coefficient

Procedia PDF Downloads 350
21920 Development of a Scale for Evaluating the Efficacy of Vacationing

Authors: Ju Yeon Lee, Seol Ah Oh, Hong il Kim, Hae Yong Do, Sung Won Choi

Abstract:

The purpose of this study was to develop the Well-being and Moments Scale (WAMS) for evaluating the efficacy of 'vacationing' as a form of mental health recuperation. 'Vacationing' is defined as going outside one's usual environment to seek refreshment and relief from daily life. To develop the WAMS, we followed recommended procedures for scale development, including reviewing related studies, conducting focus group interviews to elucidate the need for this assessment area, and modifying items based on expert opinion. Through this process, we developed the WAMS, and its psychometric properties were then tested in two separate samples. Exploratory factor analysis (EFA) was conducted with 141 participants (mean age = 30.45 years; range: 20-50 years) and identified an underlying 3-factor structure of 'Positive Emotions', 'Life Satisfaction', and 'Self-Confidence'. The 26 items retained through the EFA procedures showed excellent reliability (α = 0.93). Confirmatory factor analysis was then conducted with 200 different participants (mean age = 29.51 years; range: 20-50 years) and revealed good model fit for our hypothesized 3-factor model. Convergent validity tests also revealed correlations with other scales in the expected direction and range. Study limitations as well as the importance and utility of the WAMS are also discussed.
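The reliability figure quoted above is Cronbach's alpha; as a reference, here is a from-scratch sketch of its computation (toy scores, not the WAMS data): alpha = k/(k-1) · (1 - Σ item variances / total-score variance).

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.
    `items` is a list of per-item score lists over the same respondents."""
    k = len(items)                     # number of items
    n = len(items[0])                  # number of respondents

    def var(xs):                       # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Three perfectly consistent items -> alpha = 1
items = [[1, 2, 3, 4, 5],
         [1, 2, 3, 4, 5],
         [1, 2, 3, 4, 5]]
alpha = cronbach_alpha(items)
```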

Keywords: vacationing, positive affect, life satisfaction, self-confidence, WAMS

Procedia PDF Downloads 340
21919 Availability Analysis of a Power Plant by Computer Simulation

Authors: Mehmet Savsar

Abstract:

The reliability and availability of power stations are extremely important for achieving a required level of power generation. In particular, in the hot desert climate of Kuwait, reliable power generation is extremely important because of cooling requirements at temperatures exceeding 50 degrees centigrade. In this paper, a particular power plant, the Sabiya Power Plant, which has 8 steam turbine and 13 gas turbine stations, is studied in detail; extensive data are collected, and the availability of the station units is determined. Furthermore, a simulation model is developed and used to analyze the effects of different maintenance policies on the availability of these stations. The results show that significant improvements in power plant availability can be achieved if appropriate maintenance policies are implemented.
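A minimal Monte Carlo sketch of the underlying idea (a single unit with illustrative MTBF/MTTR values, not the Sabiya plant's data or the authors' model) alternates random up-times and repair times and measures the fraction of time spent operating:

```python
import random

def simulate_availability(mtbf, mttr, horizon=1_000_000.0, seed=42):
    """Monte Carlo availability of one unit: alternate exponential
    up-times (mean mtbf) and repair times (mean mttr) over a long
    horizon; return the fraction of the horizon spent operating."""
    rng = random.Random(seed)
    t = up = 0.0
    while t < horizon:
        run = rng.expovariate(1.0 / mtbf)      # time to next failure
        up += min(run, horizon - t)            # clip at the horizon
        t += run + rng.expovariate(1.0 / mttr) # repair downtime
    return up / horizon

# Steady-state availability should approach MTBF / (MTBF + MTTR) = 0.9
a = simulate_availability(mtbf=900.0, mttr=100.0)
```

The value of simulation over the closed-form MTBF/(MTBF+MTTR) ratio is that maintenance policies (preventive schedules, shared repair crews) are easy to add to the loop but hard to treat analytically.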

Keywords: power plants, steam turbines, gas turbines, maintenance, availability, simulation

Procedia PDF Downloads 618
21918 Numerical Study of Wettability on the Triangular Micro-pillared Surfaces Using Lattice Boltzmann Method

Authors: Ganesh Meshram, Gloria Biswal

Abstract:

In this study, we present a numerical investigation of surface wettability on triangular micropillared surfaces using a two-dimensional (2D) pseudo-potential multiphase lattice Boltzmann method with a D2Q9 model, for interaction parameters ranging from -1.40 to -2.50. Initially, the equilibrium state of a water droplet on a flat surface is simulated for various interaction parameters to examine the accuracy of the present numerical model. We then impose microscale pillars of different heights on the bottom wall to form hydrophobic and superhydrophobic surfaces, which enable higher contact angles. The wettability of the surfaces is simulated with water droplets of radius 100 lattice units in a domain of 800x800 lattice units. The present study shows that increasing the interaction parameter of the pillared hydrophobic surfaces dramatically reduces the contact area between water droplets and solid walls due to the momentum redirection phenomenon. Contact angles for different values of interaction strength have been validated qualitatively against analytical results.
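For context, the D2Q9 model referred to above uses nine discrete velocities with fixed weights; a minimal sketch of its equilibrium distribution (standard textbook form, not the authors' full pseudo-potential solver) shows that the zeroth and first moments recover density and momentum:

```python
import numpy as np

# D2Q9 lattice: standard weights and discrete velocity set
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def feq(rho, u):
    """D2Q9 equilibrium distribution for density rho and velocity u
    (lattice units, cs^2 = 1/3)."""
    cu = C @ u                  # c_i . u for each of the 9 directions
    usq = u @ u
    return rho * W * (1.0 + 3.0 * cu + 4.5 * cu ** 2 - 1.5 * usq)

f = feq(1.0, np.array([0.05, 0.0]))
rho = f.sum()                   # zeroth moment recovers the density
mom = C.T @ f                   # first moment recovers the momentum
```

The pseudo-potential (Shan-Chen-type) extension adds an inter-particle force controlled by the interaction parameter, which is what produces phase separation and tunable contact angles.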

Keywords: contact angle, lattice Boltzmann method, D2Q9 model, pseudo-potential multiphase method, hydrophobic surfaces, Wenzel state, Cassie-Baxter state, wettability

Procedia PDF Downloads 69
21917 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth in Patients with Lymph Nodes Metastases

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This paper is devoted to mathematical modelling of the progression and stages of breast cancer. We propose the Consolidated mathematical growth model of primary tumor and secondary distant metastases growth in patients with lymph node metastases (CoM-III) as a new research tool. We are interested in: 1) modelling the whole natural history of primary tumor and secondary distant metastases growth in patients with lymph node metastases; 2) developing an adequate and precise CoM-III that reflects the relations between the primary tumor and secondary distant metastases; 3) analyzing the scope of application of the CoM-III; 4) implementing the model as a software tool. Firstly, the CoM-III includes an exponential tumor growth model as a system of deterministic nonlinear and linear equations. Secondly, the mathematical model corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and of secondary distant metastases in patients with lymph node metastases: 1) the ‘non-visible period’ of the primary tumor; 2) the ‘non-visible period’ of secondary distant metastases; 3) the ‘visible period’ of secondary distant metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others rely on additional statistical data. Thus, the CoM-III model and predictive software: a) detect the different growth periods of the primary tumor and secondary distant metastases; b) forecast the period of distant metastases appearance in patients with lymph node metastases; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests.
The following are calculated by the CoM-III: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of secondary distant metastases, and the tumor volume doubling time (days) for each of these periods. The CoM-III enables, for the first time, prediction of the whole natural history of primary tumor and secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) the CoM-III correctly describes primary tumor and secondary distant metastases growth for stages IA, IIA, IIB, IIIB (T1-4N1-3M0) in patients with lymph node metastases (N1-3); b) it facilitates the understanding of the appearance period and inception of secondary distant metastases.
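The doubling-count arithmetic behind an exponential growth component like the one in CoM-III can be illustrated as follows; the 10⁹-cells-per-cm³ detectability threshold used in the usage lines is a commonly quoted illustrative figure, not a value taken from this paper:

```python
import math

def doublings(cell_count):
    """Number of volume doublings to grow from a single cell to
    `cell_count` cells under pure exponential growth,
    V(t) = V0 * 2^(t / DT)."""
    return math.log2(cell_count)

def time_to_size(cell_count, doubling_time_days):
    """Calendar time (days) for the same growth, given a constant
    tumor volume doubling time DT in days."""
    return doublings(cell_count) * doubling_time_days

# a 1 cm^3 tumor is commonly taken to hold ~1e9 cells (illustrative figure)
n_doublings = doublings(1e9)
t_days = time_to_size(1e9, doubling_time_days=100)
```

Roughly 30 doublings separate a single cell from a clinically visible 1 cm³ tumor, which is why the model's ‘non-visible period’ can dominate the natural history.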

Keywords: breast cancer, exponential growth model, mathematical model, primary tumor, secondary metastases, survival

Procedia PDF Downloads 302
21916 Stochastic Analysis of Linux Operating System through Copula Distribution

Authors: Vijay Vir Singh

Abstract:

This work focuses on studying a Linux operating system connected in a LAN (local area network). A STAR topology (called subsystem-1) and a BUS topology (called subsystem-2) are taken into account, placed at two different locations and connected to a server through a hub. In both topologies, n clients are assumed. The system has two types of failure, i.e., partial failure and complete failure. Further, partial failure has been categorized as minor or major. It is assumed that a minor partial failure degrades the subsystem, while a major partial failure brings the subsystem into breakdown mode. The system may fail completely due to server failure, hacking, blocking, etc. The system is studied using the supplementary variable technique and the Laplace transform, considering the different types of failure and two types of repair. Various measures of reliability, for example, system availability, system reliability, MTTF, and the profit function, have been discussed for different parametric values.
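The Gumbel-Hougaard family copula named in the keywords, used in such studies to couple two repair-time distributions, has a simple closed form; a minimal sketch, independent of the paper's specific repair distributions, is:

```python
import math

def gumbel_hougaard(u, v, theta):
    """Gumbel-Hougaard copula
        C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)),
    valid for theta >= 1; theta = 1 recovers independence (C = u*v),
    larger theta gives stronger upper-tail dependence."""
    if theta < 1:
        raise ValueError("theta must be >= 1")
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-(s ** (1.0 / theta)))
```

The boundary property C(u, 1) = u and the theta = 1 independence case make quick sanity checks when wiring a copula into a repair model.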

Keywords: star topology, bus topology, blocking, hacking, Linux operating system, Gumbel-Hougaard family copula, supplementary variable

Procedia PDF Downloads 370
21915 Hybrid CNN-SAR and Lee Filtering for Enhanced InSAR Phase Unwrapping and Coherence Optimization

Authors: Hadj Sahraoui Omar, Kebir Lahcen Wahib, Bennia Ahmed

Abstract:

Interferometric Synthetic Aperture Radar (InSAR) coherence is a crucial parameter for accurately monitoring ground deformation and environmental changes. However, coherence can be degraded by various factors such as temporal decorrelation, atmospheric disturbances, and geometric misalignments, limiting the reliability of InSAR measurements (Hadj-Sahraoui et al., 2019). To address this challenge, we propose an innovative hybrid approach that combines artificial intelligence (AI) with advanced filtering techniques to optimize interferometric coherence in InSAR data. Specifically, we introduce a Convolutional Neural Network (CNN) integrated with the Lee filter to enhance the performance of radar interferometry. This hybrid method leverages the strength of CNNs to automatically identify and mitigate the primary sources of decorrelation, while the Lee filter effectively reduces speckle noise, improving the overall quality of interferograms. We develop a deep learning-based model trained on multi-temporal and multi-frequency SAR datasets, enabling it to predict coherence patterns and enhance low-coherence regions. This hybrid CNN-SAR approach with Lee filtering significantly reduces noise and phase unwrapping errors, leading to more precise deformation maps. Experimental results demonstrate that our approach improves coherence by up to 30% compared to traditional filtering techniques, making it a robust solution for challenging scenarios such as urban environments, vegetated areas, and rapidly changing landscapes. Our method has potential applications in geohazard monitoring, urban planning, and environmental studies, offering a new avenue for enhancing InSAR data reliability through AI-powered optimization combined with robust filtering techniques.
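The Lee filter half of the hybrid can be sketched independently of the CNN: each pixel is pulled toward its local window mean by a gain that shrinks where the window is homogeneous (variance close to the noise level) and passes detail where it is not. A minimal pure-Python version, with window size and noise variance as assumed placeholders, is:

```python
def lee_filter(img, win=3, noise_var=0.05):
    """Minimal Lee speckle filter on a 2D list of floats.
    out = mean + k * (pixel - mean), with gain
    k = max(var - noise_var, 0) / var estimated per window;
    `noise_var` is an assumed speckle-noise variance (placeholder)."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))]
            m = sum(vals) / len(vals)
            var = sum((v - m) ** 2 for v in vals) / len(vals)
            k = max(var - noise_var, 0.0) / var if var > 0 else 0.0
            out[i][j] = m + k * (img[i][j] - m)
    return out
```

On a homogeneous region the gain collapses to zero and the filter returns the local mean, which is exactly the speckle-smoothing behavior the hybrid relies on before phase unwrapping.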

Keywords: CNN-SAR, Lee filter, hybrid optimization, coherence, InSAR phase unwrapping, speckle noise reduction

Procedia PDF Downloads 12
21914 US Track and Field System: Examining Micro-Level Practices against a Global Model for Integrated Development of Mass and Elite Sport

Authors: Peter Smolianov, Steven Dion, Christopher Schoen, Jaclyn Norberg, Nicholas Stone, Soufiane Rafi

Abstract:

This study assessed the micro-level elements of track and field development in the US against a model for integrating high-performance sport with mass participation. This investigation is important for the country’s international sport performance, which has declined relative to other countries, and for public wellbeing, which has in turn deteriorated as over half of the US population became overweight. A questionnaire was designed for the following elements of the model: talent identification and development, as well as advanced athlete support. Survey questions were validated by 12 experts, including academics, executives from sport governing bodies, coaches, and administrators. To determine areas for improvement, the questionnaires were completed by 102 US track and field coaches representing the country’s regions and coaching levels. Possible advancements were further identified through semi-structured discussions with 10 US track and field administrators. The study found that talent search and development is a critically important area for improvement: 49 percent of respondents had overall negative perceptions, and only 16 percent were positive regarding these US track and field practices. Both the quantitative survey results and the open responses revealed that the key reason for inadequate athlete development was a shortage of well-educated and properly paid coaches: 77 percent of respondents indicated that coach expertise is never or rarely high across all participant ages and levels. More than 40 percent of the respondents were uncertain of or not familiar with the world’s best talent identification and development practices, particularly methods of introducing children to track and field from outside the sport’s participation base. Millions more could be attracted to the sport by adopting best international practices.
First, physical education should be offered a minimum of three times a week in all school grades, and track and field, together with other healthy sports, should be taught at school to all children. Second, multi-sport events, including track and field disciplines, should be organized for everyone within and among all schools, cities, and regions. Third, Australian and Eastern European methods of talent search at schools should be utilized and tailored to US conditions. Fourth, comprehensive long-term athlete development guidelines should be used for the advancement of the American Development Model, particularly track and field tests and guidelines, as part of both school education and high-performance athlete development for every age group from six to over 70 years old. These world-best practices would improve the country’s international performance while increasing national sport participation and positively influencing public health.

Keywords: high performance, mass participation, sport development, track and field, USA

Procedia PDF Downloads 144
21913 An Investigation on Organisation Cyber Resilience

Authors: Arniyati Ahmad, Christopher Johnson, Timothy Storer

Abstract:

Cyber exercises are used to assess the preparedness of a community against cyber crises, technology failures, and critical information infrastructure (CII) incidents. Cyber exercises, also called cyber crisis exercises or cyber drills, involve partnerships or collaboration of public and private agencies from several sectors. This study investigates the organisation cyber resilience (OCR) of the sectors that participated in a Malaysian cyber exercise called X Maya. The study used a principle-based cyber resilience survey, the C-Suite Executive Checklist, developed by the World Economic Forum in 2012. To ensure the suitability of the survey for investigating OCR, a reliability test was conducted on the C-Suite Executive Checklist items. The research further investigates differences in OCR across the ten Critical National Information Infrastructure (CNII) sectors that participated in the cyber exercise. A one-way ANOVA test showed a statistically significant difference in OCR among these ten CNII sectors.
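The one-way ANOVA statistic used to compare sectors reduces to a ratio of between-group to within-group variance; a self-contained sketch, using made-up scores rather than the study's data, is:

```python
def anova_f(groups):
    """One-way ANOVA F statistic for k groups (pure-Python sketch of the
    kind of test used to compare OCR scores across sectors).
    F = (SS_between / (k - 1)) / (SS_within / (N - k))."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (relative to the F distribution's critical value for the same degrees of freedom) is what "statistically significant difference among sectors" summarizes.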

Keywords: critical information infrastructure, cyber resilience, organisation cyber resilience, reliability test

Procedia PDF Downloads 359
21912 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. This can be done with numerical models or experimental measurements, but the numerical approach is useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has proven useful for predicting inhalation exposure in many situations. However, since the WMR model is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition; three deposition models were implemented. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m3 chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air-change-per-hour (ACH) conditions. The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, particles were generated until the steady-state condition was reached (emission period); generation then stopped, and concentration measurements continued until the background concentration was reached (decay period). The tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities (with standard deviations) for the lowest and highest ACH were (8.87 ± 0.36) × 10^-2 m/s and (8.88 ± 0.38) × 10^-2 m/s, respectively.
The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared for the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved compared with the classic WMR model, although a difference between the measured and predicted values remains. In the emission period, the modified WMR results closely follow the experimental data; during the decay period, however, the model significantly overestimates the experimental concentrations. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the rate given by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, affects the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
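The modified WMR balance described above (an emission source against ventilation plus a first-order deposition sink) has a standard closed-form solution under the single-well-mixed-zone assumption; a sketch, with the emission rate and deposition coefficient as illustrative placeholders, is:

```python
import math

def wmr_concentration(t, S, Q, V, beta, c0=0.0):
    """Well-mixed room concentration with a first-order particle
    deposition loss beta (1/s) added to the ventilation loss Q/V:
        dC/dt = S/V - (Q/V + beta) * C
    Closed-form solution for a constant emission rate S (mass/s),
    room volume V (m^3), ventilation flow Q (m^3/s)."""
    lam = Q / V + beta
    c_ss = S / (V * lam)          # steady state: emission balances losses
    return c_ss + (c0 - c_ss) * math.exp(-lam * t)

# chamber-scale example: 0.512 m^3 at 3.0 ACH; S and beta are placeholders
c_1h = wmr_concentration(t=3600.0, S=1e-6, Q=3.0 * 0.512 / 3600.0,
                         V=0.512, beta=1e-4)
```

Setting beta to zero recovers the classic gas-phase WMR model; the decay period corresponds to S = 0 with a nonzero initial concentration c0.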

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 103
21911 Makhraj Recognition Using Convolutional Neural Network

Authors: Zan Azma Nasruddin, Irwan Mazlin, Nor Aziah Daud, Fauziah Redzuan, Fariza Hanis Abdul Razak

Abstract:

This paper focuses on a machine learning system that learns the correct pronunciation of Makhraj Huroofs. Usually, people need to find an expert in order to learn to pronounce a Huroof accurately. In this study, the researchers have developed a system that is able to learn the selected Huroofs ha, tsa, zho, and dza using a Convolutional Neural Network (CNN). The researchers present the chosen CNN architecture, designed to let the system learn the data (Huroofs) as quickly as possible and produce high accuracy during prediction. The researchers experimented with the system to measure the accuracy and the cross entropy during the training process.
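The two training quantities the authors monitor, prediction accuracy and cross entropy, are simple to compute from the network's softmax output; a framework-free sketch (independent of the TensorFlow model itself) is:

```python
import math

def softmax(logits):
    """Softmax turning a classifier's final-layer scores into
    probabilities (shifted by the max for numerical stability)."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(p_true, p_pred, eps=1e-12):
    """Cross entropy between a one-hot target and a softmax output --
    the loss monitored while training a classifier like the Huroof CNN."""
    return -sum(t * math.log(max(p, eps)) for t, p in zip(p_true, p_pred))
```

Cross entropy is zero for a perfect one-hot prediction and grows as probability mass shifts away from the true class, which is why it is tracked alongside accuracy during training.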

Keywords: convolutional neural network, Makhraj recognition, speech recognition, signal processing, tensorflow

Procedia PDF Downloads 335
21910 Fire Safety Assessment of At-Risk Groups

Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen

Abstract:

Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A disability can negatively influence a person's escape time, and this becomes even more important when people from this target group live alone. This research deals with the fire safety of these people's dwellings by means of probabilistic methods. For this purpose, fire safety is addressed by modeling the egress of the target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent floor plan has been chosen for the safety analysis, and a limit state function has been developed according to a time-line evacuation model based on a two-zone smoke development model. An analytical computer model (B-RISK) is used to simulate smoke development. Since most of the parameters involved in the fire development model are uncertain, an appropriate probability distribution function has been assigned to each variable of indeterministic nature. To quantify safety and reliability for the at-risk groups, the fire safety index method has been chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search meta-heuristic optimization algorithm has been used to compute the beta index. Sensitivity analysis has been performed to identify the most important and effective parameters for the fire safety of the at-risk groups. Results showed that the area of openings and the distance to egress exits are the more important building parameters, and that the safety of occupants improves with increasing dimensions of the occupant space (building). Fire growth is more critical than other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities it is less important.
The type of disability has a great effect on the safety level of people living in the same home layout, and people with visual impairment face a higher risk of being trapped than people with other types of disabilities.
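The fire safety index relation Pf = Phi(-beta), which links the probability of failure to the beta index, can be sketched directly; the Monte Carlo helper below assumes precomputed ASET/RSET samples and is illustrative only:

```python
from statistics import NormalDist

def safety_index(p_failure):
    """Reliability (beta) index from a failure probability via the
    fire-safety-index relation Pf = Phi(-beta), i.e.
    beta = -Phi^{-1}(Pf)."""
    return -NormalDist().inv_cdf(p_failure)

def failure_probability(aset_samples, rset_samples):
    """Monte Carlo estimate of P(RSET > ASET): egress takes longer than
    the time available. Paired samples are assumed to come from the
    probabilistic fire and evacuation models (illustrative helper)."""
    n = len(aset_samples)
    return sum(r > a for a, r in zip(aset_samples, rset_samples)) / n
```

A beta index of about 3 corresponds to a failure probability near 1.3 × 10^-3; the optimization step in the study searches for the beta that is consistent with the limit state function.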

Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty

Procedia PDF Downloads 103
21909 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils

Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha

Abstract:

Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have focused on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models for each particular soil type to obtain better SWCC estimation. It is expected that better estimation of the SWCC can be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted to evaluate the reliability of SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters for four forms of the SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC equations were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey model predictions were also compatible with the samples evaluated in this study over the full range from low to high soil water content.
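The Brooks and Corey curve that the study found most consistent has a simple closed form; a sketch, with the parameter values in the assertions chosen purely for illustration, is:

```python
def brooks_corey(psi, psi_b, lam):
    """Brooks-Corey SWCC: effective saturation
        Se = (psi_b / psi)^lambda   for psi > psi_b,
        Se = 1                      otherwise,
    where psi is soil suction, psi_b the air-entry (bubbling) suction,
    and lambda the pore-size distribution index."""
    if psi <= psi_b:
        return 1.0
    return (psi_b / psi) ** lam
```

Fitting psi_b and lambda to laboratory suction/saturation pairs (e.g. by least squares) is the optimization step the study applies to each of the four candidate SWCC forms.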

Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering

Procedia PDF Downloads 338