Search results for: model based engineering MBE
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 38976

33156 Modeling False Statements in Texts

Authors: Francielle A. Vargas, Thiago A. S. Pardo

Abstract:

According to the standard philosophical definition, lying is saying something that you believe to be false with the intent to deceive. For deception detection, the FBI trains its agents in a technique named statement analysis, which attempts to detect deception based on parts of speech (i.e., linguistic style). This method is employed in interrogations, where suspects are first asked to make a written statement. In this poster, we model false statements using linguistic style. To achieve this, we methodically analyze linguistic features in a corpus of fake news in the Portuguese language. The results show that false statements present substantial lexical, syntactic, and semantic variations, as well as distinctions in punctuation and emotion.

Keywords: deception detection, linguistic style, computational linguistics, natural language processing

Procedia PDF Downloads 212
33155 Influence of Bio-Based Admixture on Compressive Strength of Concrete for Columns

Authors: K. Raza, S. Gul, M. Ali

Abstract:

Concrete is a fundamental building material, extensively utilized by the construction industry. Loss of strength is an immense issue for the sustainability of concrete structures; concrete mostly loses its strength through cracks produced by shrinkage or the hydration process. This study aims to enhance the strength and service life of concrete structures by incorporating a bio-based admixture (BBA) in the concrete. Injected into concrete, the BBA will self-heal the cracks by producing calcium carbonate. Minimizing cracks will compact the microstructure of the concrete, due to which strength will increase. For this study, Bacillus subtilis will be used as the bio-based admixture. Calcium lactate up to 1.5% will be used as the food source for the Bacillus subtilis in concrete. Two formulations, containing 0% and 5% of Bacillus subtilis by weight of cement, will be used for the casting of concrete specimens. The direct mixing method will be adopted for incorporating the bio-based admixture in concrete. Compressive strength tests will be carried out after 28 days of curing. Scanning electron microscopy (SEM) and X-ray diffraction (XRD) analysis will be performed to examine the microstructure of the concrete. Results will be drawn by comparing the test results of the 0% and 5% formulations. Based on the expected increase in compressive strength, the use of the bio-based admixture (BBA) will be recommended for concrete columns.

Keywords: bio-based admixture, Bacillus subtilis, calcium lactate, compressive strength

Procedia PDF Downloads 219
33154 Designing Automated Embedded Assessment to Assess Student Learning in a 3D Educational Video Game

Authors: Mehmet Oren, Susan Pedersen, Sevket C. Cetin

Abstract:

Despite the frequently criticized disadvantages of traditionally used paper-and-pencil assessment, it remains the most common method in our schools. Although such assessments provide acceptable measurement, they are not capable of capturing all the aspects and the richness of learning and knowledge. Moreover, many assessments used in schools decontextualize assessment from learning; they focus on a learner's standing on a particular topic but do not capture how student learning changes over time. For these reasons, many scholars argue that using simulations and games (S&G) as assessment tools has significant potential to overcome the problems of traditional methods. S&G can benefit from advances in technology and provide a contextualized medium for assessment and teaching. Furthermore, S&G can serve as an instructional tool rather than a method to test student learning at a single time point. To investigate the potential of educational games as assessment and teaching tools, this study presents the implementation and validation of an automated embedded assessment (AEA), which can constantly monitor student learning in the game and assess performance without interrupting learning. The experiment was conducted in an undergraduate engineering course (Digital Circuit Design) with 99 participating students over a period of five weeks in the Spring 2016 semester. The purpose of this study is to examine whether the proposed AEA method is valid for assessing student learning in a 3D educational game, and to present the implementation steps. To address this question, the study inspects three aspects of the AEA for validation. First, the evidence-centered design model was used to lay out the design and measurement steps of the assessment. Then, a confirmatory factor analysis was conducted to test whether the assessment measures the targeted latent constructs.
Finally, the scores of the assessment were compared with an external measure (a validated test of student learning on digital circuit design) to evaluate the convergent validity of the assessment. The results of the confirmatory factor analysis showed that the fit of the model with three latent factors and one higher-order factor was acceptable (RMSEA < 0.00, CFI = 1, TLI = 1.013, WRMR = 0.390). All observed variables loaded significantly on the latent factors. In the second analysis, a multiple regression analysis was used to test whether the external measure significantly predicts students' performance in the game. The two predictors explained 36.3% of the variance (R2 = .36, F(2, 96) = 27.42.56, p < .00). Students' posttest scores significantly predicted game performance (β = .60, p < .000). These statistical results show that the AEA can distinctly measure three major components of the digital circuit design course. It is hoped that this study helps researchers understand how to design an AEA and showcases an implementation by providing an example methodology for validating this type of assessment.
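The regression result above rests on the coefficient of determination. As a minimal stdlib sketch of how that statistic is computed (toy numbers, not the study's scores):

```python
def r_squared(y, y_hat):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y) / len(y)
    ss_tot = sum((yi - mean) ** 2 for yi in y)                # variance around the mean
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # residual variance
    return 1.0 - ss_res / ss_tot

# Toy data only; the study's R^2 of .36 came from its own posttest scores.
y     = [10.0, 12.0, 14.0, 16.0]
y_hat = [10.5, 11.5, 14.5, 15.5]
print(r_squared(y, y_hat))  # 0.95
```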

Keywords: educational video games, automated embedded assessment, assessment validation, game-based assessment, assessment design

Procedia PDF Downloads 416
33153 Turing Pattern in the Oregonator Revisited

Authors: Elragig Aiman, Dreiwi Hanan, Townley Stuart, Elmabrook Idriss

Abstract:

In this paper, we reconsider the analysis of the Oregonator model. We highlight an error in this analysis which leads to an incorrect depiction of the parameter region in which diffusion-driven instability is possible. We believe that the cause of the oversight is the complexity of stability analyses based on eigenvalues and the dependence on parameters of matrix minors appearing in stability calculations. We regenerate the parameter space where Turing patterns can be seen, and we use the common Lyapunov function (CLF) approach, which is numerically reliable, to further confirm the dependence of the results on the diffusion coefficients.
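The instability analysis referred to here checks whether a steady state that is stable without diffusion becomes unstable once unequal diffusivities are added. A rough sketch of that check for a generic two-species system (the Jacobian below is a hypothetical activator-inhibitor example, not the Oregonator's, and plain eigenvalue conditions are used rather than the CLF approach):

```python
def turing_unstable(J, d1, d2, k_values):
    """Check diffusion-driven instability for a 2-species system.

    J: 2x2 Jacobian [[a, b], [c, d]] of the reaction terms at a steady state.
    d1, d2: diffusion coefficients of the two species.
    Returns True if the steady state is stable without diffusion but is
    destabilised by diffusion at some wavenumber k in k_values.
    """
    (a, b), (c, d) = J
    trace, det = a + d, a * d - b * c
    if trace >= 0 or det <= 0:
        return False  # not stable without diffusion in the first place
    # With diffusion, stability at wavenumber k requires det(J - k^2 D) > 0.
    for k in k_values:
        h = (a - d1 * k**2) * (d - d2 * k**2) - b * c
        if h < 0:
            return True
    return False

# Hypothetical activator-inhibitor Jacobian (tr = -2 < 0, det = 1 > 0: stable):
J = [[1.0, -2.0], [2.0, -3.0]]
ks = [0.1 * i for i in range(1, 100)]
print(turing_unstable(J, 1.0, 10.0, ks))  # unequal diffusivities: True
```

With equal diffusivities (d1 = d2) the same Jacobian stays stable at every wavenumber, which is the hallmark of a Turing instability.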

Keywords: diffusion driven instability, common Lyapunov function (CLF), Turing pattern, positive-definite matrix

Procedia PDF Downloads 356
33152 Representation Data without Lost Compression Properties in Time Series: A Review

Authors: Nabilah Filzah Mohd Radzuan, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan

Abstract:

Uncertain data is believed to be an important issue in building a prediction model. The main objective in time series uncertainty analysis is to formulate uncertain data in order to gain knowledge and fit a low-dimensional model prior to a prediction task. This paper discusses the performance of a number of techniques for dealing with uncertain data, specifically those which handle the uncertain data condition by minimizing the loss of compression properties.

Keywords: compression properties, uncertainty, uncertain time series, mining technique, weather prediction

Procedia PDF Downloads 423
33151 Contrastive Learning for Unsupervised Object Segmentation in Sequential Images

Authors: Tian Zhang

Abstract:

Unsupervised object segmentation aims at segmenting objects in sequential images and obtaining the mask of each object without any manual intervention. It remains a challenging task due to the lack of prior knowledge about the objects. Previous methods often require manually specifying the action of each object, which is often difficult to obtain. Instead, this paper needs no action information and automatically learns the actions and relations among objects from the structured environment. To obtain the object segmentation of sequential images, the relationships between objects and images are extracted to infer the actions and interactions of objects based on the multi-head attention mechanism. Three types of object relationships in the segmentation task are proposed: the relationship between objects in the same frame, the relationship between objects in two frames, and the relationship between objects and historical information. Based on these relationships, the proposed model (1) is effective in multiple-object segmentation tasks, (2) needs only images as input, and (3) produces better segmentation results as more relationships are considered. Experimental results on multiple datasets show that the method achieves state-of-the-art performance. Quantitative and qualitative analyses of the results are conducted. The proposed method could easily be extended to other similar applications.
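Per head, the multi-head attention mechanism used here reduces to scaled dot-product attention. A stdlib-only sketch on toy vectors (no learned projections or multiple heads):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention on plain lists of vectors:
    out_i = sum_j softmax_j(Q_i . K_j / sqrt(d)) * V_j."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wj * v[t] for wj, v in zip(w, V)) for t in range(len(V[0]))])
    return out

# Toy example: one query attending over two key/value pairs. Multi-head
# attention runs several such maps in parallel over learned projections.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```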

Keywords: unsupervised object segmentation, attention mechanism, contrastive learning, structured environment

Procedia PDF Downloads 102
33150 Automated Prediction of HIV-associated Cervical Cancer Patients Using Data Mining Techniques for Survival Analysis

Authors: O. J. Akinsola, Yinan Zheng, Rose Anorlu, F. T. Ogunsola, Lifang Hou, Robert Leo-Murphy

Abstract:

Cervical cancer (CC) is the second most common cancer among women living in low- and middle-income countries, with no associated symptoms during formative periods. Despite advances in medical research, numerous preventive measures are being utilized, but the incidence of cervical cancer cannot be reduced through screening tests alone. The mortality associated with invasive cervical cancer can be nipped in the bud through early-stage detection. This study applied an array of feature selection techniques with the aim of developing a model that could validly diagnose the risk factors of cervical cancer. A retrospective clinic-based cohort study was conducted on 178 HIV-associated cervical cancer patients at Lagos University Teaching Hospital, Nigeria (U54 data repository) in April 2022. The outcome measure was the automated prediction of HIV-associated cervical cancer cases; the predictor variables included demographic information, reproductive history, birth control, sexual history, and cervical cancer screening history for invasive cervical cancer. The proposed technique was implemented in R and Python, utilizing classification algorithms for the detection and diagnosis of cervical cancer. Four machine learning classification algorithms were used: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN). The dataset was split into training and testing sets in an 80:20 ratio; the numerical features were standardized, and hyperparameter tuning was carried out during training and testing.
Fitting features for the detection and diagnosis of cervical cancer were selected from the dataset using several selection methods for classifying cases as healthy or diseased. The mean age of patients was 49.7±12.1 years, the mean age at pregnancy was 23.3±5.5 years, the mean age at first sexual experience was 19.4±3.2 years, and the mean BMI was 27.1±5.6 kg/m². A larger percentage of the patients were married (62.9%), and most had at least two sexual partners (72.5%). Age of patients (OR=1.065, p<0.001), marital status (OR=0.375, p=0.011), number of pregnancy live-births (OR=1.317, p=0.007), and use of birth control pills (OR=0.291, p=0.015) were found to be significantly associated with HIV-associated cervical cancer. Of the top ten features (variables) considered in the analysis, the overall model performance of RF included an accuracy of 72.0%, a precision of 84.6%, a recall of 84.6%, and an F1-score of 74.0%, while LR had an accuracy of 74.0%, a precision of 70.0%, a recall of 70.0%, and an F1-score of 70.0%. The RF model identified ten features predictive of developing cervical cancer: the age of patients was the most important risk factor, followed by the number of pregnancy live-births, marital status, and use of birth control pills. The study shows that data mining techniques could be used to identify women living with HIV at high risk of developing cervical cancer in Nigeria and other sub-Saharan African countries.
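The reported odds ratios are obtained from logistic regression coefficients via OR = exp(beta). A minimal sketch (the coefficient below is back-derived from the reported age OR purely for illustration; it is not the study's raw output):

```python
import math

def odds_ratio(coef):
    """Odds ratio for a one-unit increase in a predictor: OR = exp(beta)."""
    return math.exp(coef)

# beta is back-derived from the reported age effect (OR = 1.065) purely
# for illustration; it is not the study's raw regression coefficient.
beta_age = math.log(1.065)
print(round(odds_ratio(beta_age), 3))  # 1.065
```

An OR above 1 (as for age) means the odds of the outcome rise with the predictor; an OR below 1 (as for use of birth control pills) means they fall.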

Keywords: associated cervical cancer, data mining, random forest, logistic regression

Procedia PDF Downloads 82
33149 Numerical Investigation of a New Two-Fluid Model for Semi-Dilute Polymer Solutions

Authors: Soroush Hooshyar, Mohamadali Masoudian, Natalie Germann

Abstract:

Many soft materials such as polymer solutions can develop localized bands with different shear rates, which are known as shear bands. Using the generalized bracket approach of nonequilibrium thermodynamics, we recently developed a new two-fluid model to study shear banding for semi-dilute polymer solutions. The two-fluid approach is an appropriate means for describing diffusion processes such as Fickian diffusion and stress-induced migration. In this approach, it is assumed that the local gradients in concentration and, if accounted for, also stress generate a nontrivial velocity difference between the components. Since the differential velocity is treated as a state variable in our model, the implementation of the boundary conditions arising from the derivative diffusive terms is straightforward. Our model is a good candidate for benchmark simulations because of its simplicity. We analyzed its behavior in cylindrical Couette flow, a rectilinear channel flow, and a 4:1 planar contraction flow. The latter problem was solved using the OpenFOAM finite volume package, and the impact of shear banding on the lip and salient vortices was investigated. For the other smooth geometries, we employed a standard Chebyshev pseudospectral collocation method. The results showed that the steady-state solution is unique with respect to initial conditions, deformation history, and the value of the diffusivity constant. However, the smaller the value of the diffusivity constant, the longer it takes to reach the steady state.

Keywords: nonequilibrium thermodynamics, planar contraction, polymer solutions, shear banding, two-fluid approach

Procedia PDF Downloads 327
33148 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates direct oxidation of ammonical nitrogen under anaerobic conditions with nitrite as an electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an anammox hybrid reactor (AHR) combining the dual advantages of suspended and attached growth media for biodegradation of ammonical nitrogen in wastewater. The experimental unit consisted of four AHRs of 5 L capacity, inoculated with mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in the ratio 1:1 at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until pseudo-steady-state removal was attained at a total nitrogen concentration of 1200 mg/L. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with nitrogen loading rates (NLR) increasing from 0.4 to 4.8 kg N/m³d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. Filter media in the AHR contributed an additional 27.2% ammonium removal, along with a 72% reduction in the sludge washout rate. This may be attributed to the filter media acting as a mechanical sieve, reducing the sludge washout rate manyfold and enhancing the biomass retention capacity of the reactor by 25%, which is the key parameter for successful operation of high-rate bioreactors. The effluent nitrate concentration, one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d⁻¹.
Model validation revealed that the Grau second-order model was more precise and predicted effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N₂ gas production in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R²=0.986) and predicted N₂ gas with the least error of precision (0.12±8.49%). SEM study of the biomass indicated a heterogeneous population of cocci and rod-shaped bacteria with average diameters varying from 1.2 to 1.5 µm. Owing to enhanced nitrogen removal efficiency (NRE) coupled with meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be a highly competitive reactor configuration for dealing with nitrogen-laden wastewater.
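The two kinetic models compared above have simple closed forms for a completely mixed reactor. A sketch using the reported first-order constant (13.0 1/d); the Grau constants a and b below are hypothetical placeholders, not the study's fitted values:

```python
def first_order_effluent(S0, k1, hrt):
    """Completely mixed reactor at steady state with first-order removal:
    (S0 - Se)/HRT = k1 * Se  =>  Se = S0 / (1 + k1 * HRT)."""
    return S0 / (1.0 + k1 * hrt)

def grau_effluent(S0, a, b, hrt):
    """Grau second-order model, rearranged from S0*HRT/(S0 - Se) = a + b*HRT."""
    return S0 * (1.0 - hrt / (a + b * hrt))

# k1 = 13.0 1/d as reported; a and b are hypothetical Grau constants.
S0 = 1200.0  # mg/L influent total nitrogen, as in the study
print(round(first_order_effluent(S0, 13.0, 1.0), 1))  # effluent at HRT = 1 d
print(round(grau_effluent(S0, 0.1, 1.0, 1.0), 1))
```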

Keywords: anammox, filter media, kinetics, nitrogen removal

Procedia PDF Downloads 378
33147 Evaluating the Factors Influencing the Efficiency and Usage of Public Sports Services in a Chinese Province

Authors: Zhankun Wang, Timothy Makubuya

Abstract:

The efficiency of public sports services in prefecture-level cities of Zhejiang from 2008 to 2012 was evaluated by applying the DEA method, and its influencing factors were then analyzed through a Tobit model. The analysis revealed the following: (i) the average efficiency of public sports services in Zhejiang presented a smooth uptrend and remained at a relatively high level from 2008 to 2012; (ii) generally, the productivity of public sports services in Zhejiang improved from 2008 to 2012, although productivity efficiency varied greatly across years and the regional difference in production efficiency increased; (iii) urbanization rate, aging rate, per capita GDP, and population density correlated significantly and positively with public sports service efficiency in Zhejiang, of which the most significant was the aging rate; however, population density and per capita GDP had less impact on efficiency. In addition, whether the efficiency of public sports services in different areas of Zhejiang translates into overall public wellbeing in both rural and urban settings is still arguable.

Keywords: DEA model, public sports service, efficiency, Tobit model, Malmquist productivity index, Zhejiang

Procedia PDF Downloads 282
33146 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture

Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko

Abstract:

Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic-lumbar spine from 15 healthy patients and 15 with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50x50 pixels were obtained. The set of features was derived from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet, yielding 290 descriptors of textural features. The dimension of the feature space was reduced using three selection methods: the Fisher coefficient (FC), mutual information (MI), and minimization of classification error probability combined with average correlation coefficients between the chosen features (POE + ACC). Each method returned the ten features occupying the initial places in a ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections produced the same features arranged in a different order; in both rankings, the 50th percentile (Perc.50%) was in first place, and the next selected features came from the co-occurrence matrix. The sets of features selected in the selection process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT), and reduced error pruning tree (REPT).
To assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV), and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers using the set of features selected by the POE + ACC method. For the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3%, and PPV = 93.3%; additionally, the other two parameters, TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. For the logistic model trees classifier, the same ACC value was obtained (ACC = 90%), along with the highest values of TNR = 88.3% and NPV = 88.3%; the other two parameters remained close to the highest, with TPR = 91.7% and PPV = 91.6%. The results of the experiment show that classification trees are an effective method for classifying texture features, allowing the condition of spongy tissue to be identified for healthy cases and those with osteoporosis.
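The five quality parameters used above all follow from the counts of a binary confusion matrix. A short sketch with hypothetical counts (not the study's raw results):

```python
def metrics(tp, fn, fp, tn):
    """Binary classification quality measures from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)  # overall classification accuracy
    tpr = tp / (tp + fn)                   # true positive rate (sensitivity)
    tnr = tn / (tn + fp)                   # true negative rate (specificity)
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    return acc, tpr, tnr, ppv, npv

# Hypothetical counts for 120 samples (60 per class); not the study's data.
print([round(m, 3) for m in metrics(tp=56, fn=4, fp=8, tn=52)])
```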

Keywords: classification, feature selection, texture analysis, tree algorithms

Procedia PDF Downloads 173
33145 Enhanced Face Recognition with Daisy Descriptors Using 1BT Based Registration

Authors: Sevil Igit, Merve Meric, Sarp Erturk

Abstract:

In this paper, it is proposed to improve Daisy-descriptor-based face recognition using a novel one-bit transform (1BT) based pre-registration approach. The 1BT-based pre-registration procedure is fast and has low computational complexity. It is shown that face recognition accuracy is improved with the proposed approach. The proposed approach facilitates highly accurate face recognition using the Daisy descriptor with simple matching, thereby enabling a low-complexity solution.
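The classical 1BT binarizes an image by comparing each pixel against a multi-band-pass filtered version of itself. The sketch below substitutes the global mean for that filter purely to keep the illustration short; it is not the paper's exact transform:

```python
def one_bit_transform(img):
    """Simplified one-bit transform: each pixel becomes 1 if it exceeds the
    global mean, else 0. The original 1BT compares against a multi-band-pass
    filtered image; the global mean is used here only for brevity."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return [[1 if p > mean else 0 for p in row] for row in img]

# Tiny 2x2 example (mean = 122.5):
img = [[10, 200], [30, 250]]
print(one_bit_transform(img))  # [[0, 1], [0, 1]]
```

Registration then reduces to finding the shift minimizing the number of mismatched bits between two such binary images, which is what makes the pre-registration step cheap.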

Keywords: face recognition, Daisy descriptor, One-Bit Transform, image registration

Procedia PDF Downloads 358
33144 Investigation on the Energy Impact of Spatial Geometry in a Residential Building Using Building Information Modeling Technology

Authors: Shashank S. Bagane, H. N. Rajendra Prasad

Abstract:

Building Information Modeling (BIM) has developed into a potent solution. The consistent development of BIM technology in the Architecture, Engineering, and Construction (AEC) industry has enhanced the effectiveness of construction and decision making. However, growing global warming and the energy crisis have made building energy analysis an important factor to be considered in the AEC industry. Integrating energy analysis into the planning and design phase of a structure has become a necessity. In the current construction industry, estimating energy usage and reducing its footprint is a high priority, and sustainability is given prominence alongside energy efficiency. This demand compels designers, planners, and engineers to inspect sustainable performance throughout a building's life cycle. The current study primarily focuses on the energy consumption, space arrangement, and spatial geometry of a residential building. Residential structures in India are most commonly constructed considering Vastu Shastra; Vastu designs are intended to integrate architecture with nature, utilizing geometric patterns, symmetry, and directional alignments. In the current study, a residential brick masonry structure is considered for BIM analysis. An architectural model of the structure will be created using Revit software, and the orientation and spatial arrangement will then be finalized based on Vastu principles. Furthermore, the structure will be investigated for the impact of building orientation and spatial arrangement on energy use with Green Building Studio software. Based on the BIM analysis of the structure, the energy consumption of the respective building orientations will be understood.
A well-oriented building with a good spatial arrangement can save a considerable amount of energy throughout its life cycle and reduces the need for heating and lighting, diminishing energy usage and improving the energy efficiency of the residential building.

Keywords: building information modeling, energy impact, spatial geometry, vastu

Procedia PDF Downloads 155
33143 Reliability-Based Method for Assessing Liquefaction Potential of Soils

Authors: Mehran Naghizaderokni, Asscar Janalizadechobbasty

Abstract:

This paper explores a probabilistic method for assessing the liquefaction potential of sandy soils. Current simplified methods for assessing soil liquefaction potential use a deterministic safety factor to determine whether liquefaction will occur or not; however, these methods are unable to determine the liquefaction probability related to a safety factor. A solution to this problem can be found by reliability analysis. This paper presents a reliability analysis method based on a popular deterministic liquefaction analysis method. The proposed probabilistic method is formulated based on the results of reliability analyses of 190 field records and observations of soil performance against liquefaction. The results of the present study show that a safety factor greater or smaller than 1 does not necessarily mean safety or liquefaction, respectively, and that for assessing liquefaction probability, a reliability-based analysis should be used. This reliability method uses the empirical acceleration attenuation law in the Chalos area to derive the probability density distribution function and the statistics for the earthquake-induced cyclic shear stress ratio (CSR). The CSR and CRR statistics are used together with the first-order second-moment method to calculate the relation between the liquefaction probability, the safety factor, and the reliability index. Based on the proposed method, the liquefaction probability related to a safety factor can be easily calculated, and the influence of some of the soil parameters on the liquefaction probability can be quantitatively evaluated.
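Assuming independent, normally distributed CSR and CRR, the first-order second-moment relation between the reliability index and the liquefaction probability can be sketched as follows (the statistics below are hypothetical, not values derived for the Chalos area):

```python
import math

def reliability_index(mu_crr, sd_crr, mu_csr, sd_csr):
    """FOSM reliability index for the limit state g = CRR - CSR,
    assuming CRR and CSR are independent normal variables."""
    return (mu_crr - mu_csr) / math.sqrt(sd_crr**2 + sd_csr**2)

def liquefaction_probability(beta):
    """P(liquefaction) = P(g < 0) = Phi(-beta), via the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# Hypothetical CRR/CSR statistics for illustration only:
beta = reliability_index(mu_crr=0.30, sd_crr=0.06, mu_csr=0.22, sd_csr=0.04)
print(round(beta, 2), round(liquefaction_probability(beta), 3))
```

Note that a mean safety factor above 1 (here 0.30/0.22 ≈ 1.36) still leaves a non-negligible liquefaction probability, which is the paper's central point.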

Keywords: liquefaction, reliability analysis, Chalos area, civil and structural engineering

Procedia PDF Downloads 463
33142 Nonlinear Control of Mobile Inverted Pendulum: Theory and Experiment

Authors: V. Sankaranarayanan, V. Amrita Sundari, Sunit P. Gopal

Abstract:

This paper presents the design and implementation of a nonlinear controller for the point-to-point control of a mobile inverted pendulum (MIP). The controller is designed based on the kinematic model of the MIP to stabilize all four coordinates. The stability of the closed-loop system is proved using Lyapunov stability theory. The proposed controller is validated through numerical simulations and also implemented in a laboratory prototype. Results are presented to evaluate the performance of the proposed closed-loop system.

Keywords: mobile inverted pendulum, switched control, nonlinear systems, Lyapunov stability

Procedia PDF Downloads 325
33141 A Different Approach to Optimize Fuzzy Membership Functions with Extended FIR Filter

Authors: Jun-Ho Chung, Sung-Hyun Yoo, In-Hwan Choi, Hyun-Kook Lee, Moon-Kyu Song, Choon-Ki Ahn

Abstract:

The extended finite impulse response (EFIR) filter is employed to optimize the membership functions (MFs) of a fuzzy model that has strong nonlinearity. MFs are important parts of the fuzzy logic system (FLS), and thus optimizing the MFs of an FLS is one approach to improving the performance of its output. We employ the EFIR filter as an alternative optimization option for the nonlinear fuzzy model. The performance of the EFIR filter is demonstrated on a fuzzy cruise control via a numerical example.
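The membership functions being optimized are commonly piecewise-linear. As a generic illustration of what an MF evaluates (not the paper's EFIR formulation), a triangular MF can be written as:

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b.
    Returns the membership degree of x in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical "cruising speed" fuzzy set peaking at 5.0:
print(tri_mf(5.0, 0.0, 5.0, 10.0))  # 1.0
print(tri_mf(2.5, 0.0, 5.0, 10.0))  # 0.5
```

Tuning an FLS of this kind means adjusting the (a, b, c) parameters of each MF, which is the role the EFIR filter plays in the paper.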

Keywords: fuzzy logic system, optimization, membership function, extended FIR filter

Procedia PDF Downloads 718
33140 The Challenges of Scaling Agile to Large-Scale Distributed Development: An Overview of the Agile Factory Model

Authors: Bernard Doherty, Andrew Jelfs, Aveek Dasgupta, Patrick Holden

Abstract:

Many companies have moved to agile and hybrid agile methodologies, where portions of the Software Design Life-cycle (SDLC) and Software Test Life-cycle (STLC) can be time-boxed in order to enhance delivery speed and quality and to increase flexibility to changes in software requirements. Despite the widespread proliferation of agile practices, implementation often fails due to lack of adequate project management support, decreased motivation, or fear of increased interaction. Consequently, few organizations adopt agile processes effectively, and tailoring is often required to integrate the agile methodology in large-scale environments. This paper provides an overview of the challenges in implementing an innovative, large-scale, tailored realization of the agile methodology termed the Agile Factory Model (AFM), with the aim of comparing and contrasting issues of specific importance to organizations undertaking large-scale agile development. The conclusions demonstrate that agile practices can be effectively translated to a globally distributed development environment.

Keywords: agile, agile factory model, globally distributed development, large-scale agile

Procedia PDF Downloads 288
33139 Removal of Basic Dyes from Aqueous Solutions with a Treated Spent Bleaching Earth

Authors: M. Mana, M. S. Ouali, L. C. de Menorval

Abstract:

A spent bleaching earth from an edible oil refinery was treated by impregnation with a normal (1 N) sodium hydroxide solution followed by mild thermal treatment (100°C). The obtained material (TSBE) was washed, dried, and characterized by X-ray diffraction, FTIR, SEM, BET, and thermal analysis. The clay structure was not apparently affected by the treatment, and the impregnated organic matter was quantitatively removed. We investigated the comparative sorption of safranine and methylene blue on this material, the spent bleaching earth (SBE), and the virgin bleaching earth (VBE). The kinetic results fit the pseudo-second-order kinetic model and the Weber and Morris intra-particle diffusion model. The pH had no effect on the sorption efficiency. The sorption isotherms followed the Langmuir model for various sorbent concentrations, with good values of the determination coefficient. A linear relationship was found between the calculated maximum removal capacity and the solid/solution ratio. A comparison between the results obtained with this material and those of the literature highlighted the low cost and good removal capacity of the treated spent bleaching earth.
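The two fitted models mentioned above have simple closed forms: the Langmuir isotherm for equilibrium uptake and the integrated pseudo-second-order kinetic equation. A sketch with hypothetical parameters (not the study's fitted constants):

```python
def langmuir_q(qmax, KL, Ce):
    """Langmuir isotherm: q = qmax * KL * Ce / (1 + KL * Ce),
    where Ce is the equilibrium concentration and qmax the monolayer capacity."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def pseudo_second_order_qt(qe, k2, t):
    """Integrated pseudo-second-order kinetics:
    qt = k2 * qe^2 * t / (1 + k2 * qe * t)."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

# Hypothetical parameters for illustration only (mg/g, L/mg, g/(mg.min)):
print(round(langmuir_q(qmax=50.0, KL=0.2, Ce=10.0), 2))
print(round(pseudo_second_order_qt(qe=40.0, k2=0.01, t=30.0), 2))
```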

Keywords: basic dyes, isotherms, sorption, spent bleaching earth

Procedia PDF Downloads 242
33138 Unified Assessment of Power System Reserve-based Reliability Levels

Authors: B. M. Alshammari, M. A. El-Kady

Abstract:

This paper presents a unified framework for the assessment of reserve-based reliability levels in electric power systems using various reserve assessment criteria. The unified approach rests on a reserve-based analysis of the relationship between available generation capacities and required demand levels. The developed approach takes into account load variations as well as contingencies, which occur randomly and cause some generation and/or transmission capacities to be lost (become unavailable). The calculated reserve-based indices, which are important for assessing the reserve capabilities of the power system under various operating scenarios, are therefore probabilistic in nature. They reflect the fact that neither the load levels nor the generation or transmission capacities are known with absolute certainty; rather, they are subject to random variations, and consequently only the expected values of the reserve-based reliability indices can be evaluated. Practical applications to the Saudi electricity power grid are also presented for demonstration purposes.
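The probabilistic character of such reserve-based indices can be sketched with a toy Monte Carlo calculation: each generating unit is available with some probability, and the expected value of a reliability index such as the loss-of-load probability is estimated by sampling random outage states. The unit data below are illustrative and not taken from the paper:

```python
import random

def loss_of_load_probability(units, load, trials=20000, seed=1):
    """Estimate P(available capacity < load) by sampling random outage states.
    units: list of (capacity_mw, forced_outage_rate) pairs."""
    rng = random.Random(seed)
    short = 0
    for _ in range(trials):
        # A unit is available with probability (1 - forced outage rate).
        available = sum(cap for cap, forr in units if rng.random() >= forr)
        if available < load:
            short += 1
    return short / trials
```

In a real reserve study the unit list, outage rates and load duration curve would come from system data; here the function only demonstrates why the indices are expected values rather than deterministic quantities.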

Keywords: assessment, power system, reserve, reliability

Procedia PDF Downloads 611
33137 Imputing the Minimum Social Value of Public Healthcare: A General Equilibrium Model of Israel

Authors: Erez Yerushalmi, Sani Ziv

Abstract:

The rising demand for healthcare services, without a corresponding rise in public supply, has led to a debate on whether to increase private healthcare provision, especially in hospital services and second-tier healthcare. Proponents of increasing private healthcare highlight gains in efficiency, while opponents emphasize the risk to social welfare. Neither side, however, provides a measure of the social value and its impact on the economy in monetary terms. In this paper, we impute a minimum social value of public healthcare that corresponds to indifference between the gains in efficiency and the losses to social welfare. Our approach resembles contingent valuation methods, which introduce a hypothetical market for non-commodities, but differs from them in that we use numerical simulation techniques to exploit certain market failure conditions. We develop a general equilibrium model that distinguishes between public-private healthcare services and public-private financing, in which social value is modelled as a by-product of healthcare services. The model is then calibrated to our unique health-focused Social Accounting Matrix of Israel and simulates the introduction of a hypothetical health-labour market, given that this market is heavily regulated in the baseline (i.e., the true situation in Israel today). For baseline parameters, we estimate the minimum social value at around 18% of public healthcare financing. The intuition is that the gain in economic welfare from improved efficiency is offset by the loss in social welfare due to a reduction in available social value. We furthermore simulate a deregulated healthcare scenario that internalizes the imputed social value and searches for the optimal weight of public and private healthcare provision.

Keywords: contingent valuation method (CVM), general equilibrium model, hypothetical market, private-public healthcare, social value of public healthcare

Procedia PDF Downloads 144
33136 Urban Flood Resilience Comprehensive Assessment of "720" Rainstorm in Zhengzhou Based on Multiple Factors

Authors: Meiyan Gao, Zongmin Wang, Haibo Yang, Qiuhua Liang

Abstract:

Under the background of global climate change and the rapid development of modern urbanization, the frequency of climate disasters such as extreme urban precipitation is gradually increasing worldwide. In this paper, the Hi-PIMS model is used to simulate the "720" flood in Zhengzhou; the continuous stages of flood resilience are determined and the urban flood process is divided into stages. Flood resilience curves under the influence of multiple factors were determined, and urban flood resilience was evaluated by combining the results of these curves. The flood resilience of each urban unit grid was evaluated based on economy, population, road network, hospital distribution and land use type. First, rainfall data from meteorological stations near Zhengzhou and remote sensing rainfall data from July 17 to 22, 2021 were collected, and the Kriging interpolation method was used to extend the rainfall data over Zhengzhou. Based on these data, the flood processes generated by four rainfall events in Zhengzhou were reproduced. From the resulting inundation extents and depths in different areas, the flood process was divided into four stages (absorption, resistance, overload and recovery) based on the once-in-50-years rainfall standard. At the same time, based on slope, GDP, population, hospital service area, land use type, road network density and other factors, resilience curves were applied to evaluate the urban flood resilience of different regional units, and the differences among the flood processes of different precipitation events during the "720" rainstorm in Zhengzhou were analyzed. Faced with rainfall exceeding the once-in-1,000-years level, most areas quickly enter the overload stage. The influence of each factor differs across areas: some areas with ramps or higher terrain show better resilience and restore normal social order faster, that is, their recovery stage is shorter, while some low-lying areas or special terrain, such as tunnels, enter the overload stage faster under heavy rainfall. As a result, high levels of flood protection, water level warning systems and faster emergency response are needed in areas with low resilience and high risk. The building density of built-up areas, the population of densely populated areas and the road network density all have a certain negative impact on urban flood resistance, while the positive impact of slope on flood resilience is also very obvious. While hospitals have positive effects on medical treatment, they also carry negative effects, such as high population and asset density, when floods occur; a separate comparison of the unit grids containing hospitals shows that their resilience is low when they encounter floods. Therefore, in addition to improving the flood resistance capacity of cities, reasonable planning can also increase their flood response capacity. Changes in these influencing factors can further improve urban flood resilience, for example raising design standards, providing temporary water storage areas for flood events, training emergency personnel to respond faster and adjusting emergency support equipment.
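The resilience-curve idea used above can be sketched numerically: system performance is tracked through the absorption, resistance, overload and recovery stages, and a resilience index is taken as the normalized area under the performance curve. This is a common convention in the resilience literature, not necessarily the exact metric of this paper:

```python
def resilience_index(times, performance, baseline=1.0):
    """Normalized area under a system performance curve (trapezoid rule).
    performance[i] is system functionality (0..baseline) at times[i]."""
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        area += 0.5 * (performance[i] + performance[i - 1]) * dt
    total = baseline * (times[-1] - times[0])  # area if no degradation occurred
    return area / total
```

A curve that dips during the overload stage and climbs back during recovery yields an index below 1; a faster recovery (shorter dip) yields a higher index, matching the qualitative discussion above.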

Keywords: urban flood resilience, resilience assessment, hydrodynamic model, resilience curve

Procedia PDF Downloads 37
33135 An Inspection of Two Layer Model of Agency: An fMRI Study

Authors: Keyvan Kashkouli Nejad, Motoaki Sugiura, Atsushi Sato, Takayuki Nozawa, Hyeonjeong Jeong, Sugiko Hanawa , Yuka Kotozaki, Ryuta Kawashima

Abstract:

The perception of agency/control is altered by discrepancies in the environment: when predictions (of possible results) and actual results mismatch, the sense of agency might change. Synofzik et al. proposed a two-layer model of agency. In the first layer, the Feeling of Agency (FoA) is not directly available to awareness; a slight mismatch in the environment/outcome might alter the FoA while the agent still feels in control. If the discrepancy passes a threshold, it becomes available to consciousness and alters the Judgment of Agency (JoA), which is directly available in the person's awareness. Most experiments so far investigate only the subjects' rather conscious JoA, while the FoA has been neglected. In this experiment, we target the FoA by using subliminal discrepancies that cannot be consciously detected by the subjects. We explore whether this two-level model can be detected in the subjects' behavior and then try to map it onto their brain activity. To do this, in an fMRI study, we incorporated both consciously detectable mismatches between action and result and subliminal discrepancies in the environment. Also, unlike previous experiments, in which subjective questions to the participants mainly trigger the rather conscious JoA, we also tried to measure the rather implicit FoA by asking participants to rate their performance. We compared behavioral results and brain activation for trials with conscious discrepancies and trials with subliminal discrepancies against trials with no discrepancies, and against each other. In line with our expectations, conditions with consciously detectable incongruencies triggered lower JoA ratings than conditions without, and conditions with any type of discrepancy had lower FoA ratings than conditions without. Additionally, we found that the TPJ, and the angular gyrus in particular, play a role in the coding of JoA and also of FoA.

Keywords: agency, fMRI, TPJ, two layer model

Procedia PDF Downloads 467
33134 A Survey of Grammar-Based Genetic Programming and Applications

Authors: Matthew T. Wilson

Abstract:

This paper covers a selection of research utilizing grammar-based genetic programming and illustrates how a context-free grammar can be used to constrain genetic programming. It focuses heavily on grammatical evolution, one of the most popular variants of grammar-based genetic programming, and the way its operators and terminals are specialized and modified from those in genetic programming. A variety of general-purpose implementations of grammatical evolution are covered, as well as research efforts each focused on applying grammatical evolution or grammar-based genetic programming to a single application or a specific problem, including some classically considered genetic programming problems, such as the Santa Fe Trail.
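The core of grammatical evolution, mapping an integer genotype to a program through a context-free grammar, can be sketched in a few lines: each codon selects a production rule for the leftmost nonterminal via the modulo rule. The tiny expression grammar here is purely illustrative:

```python
# A minimal context-free grammar: nonterminals map to lists of productions.
GRAMMAR = {
    "<expr>": [["(", "<expr>", "<op>", "<expr>", ")"], ["x"], ["1"]],
    "<op>": [["+"], ["*"]],
}

def ge_map(genotype, start="<expr>", max_wraps=2):
    """Grammatical-evolution genotype-to-phenotype mapping with the modulo
    rule; genotype codons are reused (wrapped) up to max_wraps times."""
    seq = [start]
    i = 0
    limit = len(genotype) * (max_wraps + 1)
    while i < limit:
        # Find the leftmost nonterminal still awaiting expansion.
        nt = next((j for j, s in enumerate(seq) if s in GRAMMAR), None)
        if nt is None:
            return "".join(seq)  # fully terminal: mapping complete
        rules = GRAMMAR[seq[nt]]
        choice = genotype[i % len(genotype)] % len(rules)  # the modulo rule
        seq = seq[:nt] + rules[choice] + seq[nt + 1 :]
        i += 1
    return None  # ran out of codons/wraps: invalid individual
```

For example, the genotype [0, 1, 0, 2, 1] maps to the expression (x+1) under this grammar, and search operators (crossover, mutation) act on the integer genotype rather than on the tree itself.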

Keywords: context-free grammar, genetic algorithms, genetic programming, grammatical evolution

Procedia PDF Downloads 182
33133 Optimal Price Points in Differential Pricing

Authors: Katerina Kormusheva

Abstract:

Pricing plays a pivotal role in the marketing discipline, as it directly influences consumer perceptions, purchase decisions, and the overall market positioning of a product or service. This paper seeks to expand current knowledge in the area of discriminatory and differential pricing, a main area of marketing research. The methodology includes developing a framework and a model for determining how many price points to implement in differential pricing. We focus on choosing the levels of differentiation, derive a functional form of the proposed model framework, and lastly test it empirically with data from a large-scale marketing pricing experiment on telecommunications services.

Keywords: marketing, differential pricing, price points, optimization

Procedia PDF Downloads 88
33132 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence

Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno

Abstract:

Missing data is a common challenge in the statistical analysis of clinical survey datasets. A variety of methods have been developed to deal with missing values in survey data, of which imputation is the most commonly used. However, to minimize the bias introduced by imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we identify different types of missing values: missing data due to skip patterns (SPMD), undetermined missing data (UMD), and genuine missing data (GMD), and apply rough set imputation only to the GMD portion of the missing data. We used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we used p-values from the Wald test. To evaluate the accuracy of the prediction, we considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed than for the non-imputed datasets (28.7 vs. 23.4). The average confidence interval width decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for the imputation of GMD improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.

Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index

Procedia PDF Downloads 166
33131 Performance Optimization on Waiting Time Using Queuing Theory in an Advanced Manufacturing Environment: Robotics to Enhance Productivity

Authors: Ganiyat Soliu, Glen Bright, Chiemela Onunka

Abstract:

Performance optimization plays a key role in controlling waiting time during manufacturing in an advanced manufacturing environment so as to improve productivity. Queuing theory was used to examine the performance of a multi-stage production line. Robotics, as a disruptive technology, was implemented in a virtual manufacturing scenario during the packaging process to study the effect of waiting time on productivity. The queuing model was used to determine the optimum service rate required by robots during the packaging stage of manufacturing to yield an optimum production cost. Different production rates were assumed in the virtual manufacturing environment, and the cost of packaging was estimated together with the optimum production cost. An equation was generated using queuing theory, and the Newton-Raphson method was adopted for the analysis of the scenario. The queuing theory presented here provides an adequate analysis of the number of robots required to regulate waiting time in order to increase output. The arrival rate of the product was high, which shows that the queuing model was effective in minimizing service cost and waiting time during manufacturing. At a reduced waiting time, there was an improvement in the number of products obtained per hour. Overall productivity was improved based on the assumptions used in the queuing model implemented in the virtual manufacturing scenario.
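As a sketch of the kind of calculation involved (assuming, for illustration, a single-server M/M/1 stage rather than the paper's full multi-stage model), the mean queue waiting time is Wq = lam/(mu*(mu - lam)), and the service rate mu a packaging robot must sustain to hit a target Wq can be found by Newton-Raphson iteration:

```python
def required_service_rate(lam, target_wq, iters=100, tol=1e-12):
    """Solve lam / (mu * (mu - lam)) = target_wq for mu > lam
    using Newton-Raphson iteration on an M/M/1 waiting-time model."""
    mu = 2.0 * lam  # initial guess safely inside the stable region mu > lam
    for _ in range(iters):
        f = lam / (mu * (mu - lam)) - target_wq
        fp = -lam * (2.0 * mu - lam) / (mu * (mu - lam)) ** 2  # dWq/dmu
        step = f / fp
        mu -= step
        if mu <= lam:  # clamp the iterate back into the stable region
            mu = lam * 1.001
        if abs(step) < tol:
            break
    return mu
```

For example, with an arrival rate of 10 items per minute and a target mean wait of 0.05 minutes, the robot must sustain a service rate of 20 items per minute, which also matches the closed-form root of the underlying quadratic.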

Keywords: performance optimization, productivity, queuing theory, robotics

Procedia PDF Downloads 145
33130 Numerical Study on the Flow around a Steadily Rotating Spring: Understanding the Propulsion of a Bacterial Flagellum

Authors: Won Yeol Choi, Sangmo Kang

Abstract:

The propulsion of a bacterial flagellum in a viscous fluid has attracted much interest in the field of biological hydrodynamics but remains not yet fully understood and is thus still a challenging problem. In this study, therefore, we have numerically investigated the flow around a steadily rotating micro-sized spring to further understand such flagellar propulsion. Note that a bacterium gains thrust (propulsive force) to move forward by rotating the flagellum connected to its body through a bio-motor. For the investigation, we convert the spring model from the micro scale to the macro scale using a similitude (scaling) law and perform simulations on the converted macro-scale model using a commercial software package, CFX v13 (ANSYS). To scrutinize the propulsion characteristics of the flagellum, we carry out parameter studies by changing flow parameters expected to affect the thrust force experienced by the rotating spring, such as the pitch, helical radius and rotational speed of the spring and the Reynolds number (or fluid viscosity). Results show that the propulsion characteristics depend strongly on these parameters. The forward thrust increases linearly with either the rotational speed or the fluid viscosity. In addition, the thrust is directly proportional to the square of the helical radius, while with increasing pitch the thrust rises to a peak value and then decreases. Finally, we also present the flow and pressure fields, visualized to support these observations.
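The similitude (scaling) law mentioned above can be sketched as a Reynolds-number match: keeping Re = omega * r**2 / nu equal between the micro-sized spring and the macro-scale model fixes the rotation rate of the scaled-up model. The numbers below are illustrative, not the study's actual parameters:

```python
def scaled_rotation_rate(omega_micro, r_micro, r_macro, nu_micro, nu_macro):
    """Rotation rate of the macro model that matches the rotational
    Reynolds number Re = omega * r**2 / nu of the micro-scale spring."""
    return omega_micro * (r_micro / r_macro) ** 2 * (nu_macro / nu_micro)
```

This is why macro-scale flagellum experiments typically use a much more viscous fluid (e.g., glycerin) and a much lower rotation rate: both substitutions keep the low-Reynolds-number flow regime of the bacterium intact.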

Keywords: fluid viscosity, hydrodynamics, similitude, propulsive force

Procedia PDF Downloads 347
33129 Classification of Poverty Level Data in Indonesia Using the Naïve Bayes Method

Authors: Anung Style Bukhori, Ani Dijah Rahajoe

Abstract:

Poverty poses a significant challenge in Indonesia, requiring an effective analytical approach to understand and address the issue. In this research, we applied the Naïve Bayes classification method to examine and classify poverty data in Indonesia. The main focus is on classifying data using RapidMiner, a powerful data analysis platform. The analysis process involves splitting the data to train and test the classification model. First, we collected and prepared a poverty dataset that includes various factors such as education, employment, and health. The experimental results indicate that the Naïve Bayes classification model can provide accurate predictions regarding the risk of poverty. The use of RapidMiner in the analysis process offers flexibility and efficiency in evaluating the model's performance. The classification produces several values that serve as the standard for classifying poverty data in Indonesia using Naïve Bayes. The accuracy obtained is 40.26%, with a recall of 35.94% for the moderate class, 63.16% for the high class, and 38.03% for the low class. The precision is 58.97% for the moderate class, 17.39% for the high class, and 58.70% for the low class.
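The per-class precision and recall figures quoted above come from a confusion matrix; a minimal sketch of that computation (on made-up labels, not the paper's dataset) is:

```python
def per_class_metrics(y_true, y_pred, classes):
    """Accuracy plus per-class (precision, recall) from paired label lists."""
    per_class = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        per_class[c] = (precision, recall)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return accuracy, per_class
```

In RapidMiner these numbers are read off the performance operator's confusion matrix; computing them by hand like this makes the asymmetry between the classes (e.g., high recall but low precision for the high-poverty class) easy to diagnose.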

Keywords: poverty, classification, naïve bayes, Indonesia

Procedia PDF Downloads 49
33128 Affordable Aerodynamic Balance for Instrumentation in a Wind Tunnel Using Arduino

Authors: Pedro Ferreira, Alexandre Frugoli, Pedro Frugoli, Lucio Leonardo, Thais Cavalheri

Abstract:

The teaching of fluid mechanics in engineering courses is, in general, a source of great learning difficulties. Experiments with didactic wind tunnels can facilitate the education of future professionals. The objective of this proposal is the development of a low-cost aerodynamic balance to be used in a didactic wind tunnel. The set comprises an Arduino microcontroller, programmed with open-source software and linked to load cells built by students from another project. The didactic wind tunnel is 5.0 m long, and the test section is 90.0 cm x 90.0 cm x 150.0 cm. A WEG® electric motor, model W-22 of 9.2 HP, moves a fan with nine blades, each blade 32.0 cm long. A WEG® frequency inverter, model CFW-08 (vector inverter), is responsible for wind speed control and also for reversing the motor's rotational direction. A flat-convex airfoil profile prototype was tested by measuring the drag and lift forces at certain angles of attack; the airflow conditions remained constant, monitored by a Pitot tube connected to an EXTECH® Instruments Model HD755 digital differential pressure manometer. The results indicate good agreement with theory. The choice of components in this proposal resulted in a low-cost product providing a high level of specific knowledge of fluid mechanics, which may be a good alternative for teaching in countries with scarce educational resources. The system also allows expansion to measure other parameters, such as fluid velocity, temperature and pressure, as well as the possibility of automating other functions.
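Once the load cells give the lift and drag forces, the standard nondimensional coefficients follow from the dynamic pressure q = 0.5*rho*v**2; a minimal sketch of that post-processing step (with illustrative numbers, not the balance's actual calibration) is:

```python
def aero_coefficients(lift, drag, rho, v, area):
    """Lift and drag coefficients from measured forces (N), air density
    (kg/m^3), free-stream wind speed (m/s) and reference area (m^2)."""
    q = 0.5 * rho * v ** 2  # dynamic pressure, Pa
    return lift / (q * area), drag / (q * area)
```

In the didactic setup, v would be obtained from the Pitot tube reading and the forces from the Arduino-sampled load cells; plotting the resulting CL and CD against the angle of attack is the classic closing exercise for the experiment.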

Keywords: aerodynamic balance, wind tunnel, strain gauge, load cell, Arduino, low-cost education

Procedia PDF Downloads 436
33127 Enhancing Email Security: A Multi-Layered Defense Strategy Approach and an AI-Powered Model for Identifying and Mitigating Phishing Attacks

Authors: Anastasios Papathanasiou, George Liontos, Athanasios Katsouras, Vasiliki Liagkou, Euripides Glavas

Abstract:

Email remains a crucial communication tool due to its efficiency, accessibility and cost-effectiveness, enabling rapid information exchange across global networks. However, the global adoption of email has also made it a prime target for cyber threats, including phishing, malware and Business Email Compromise (BEC) attacks, which exploit its integral role in personal and professional realms to commit fraud and data breaches. To combat these threats, this research advocates a multi-layered defense strategy incorporating advanced technological tools such as anti-spam and anti-malware software, machine learning algorithms and authentication protocols. Moreover, we developed an artificial intelligence model specifically designed to analyze email headers and assess their security status. This AI-driven model examines various components of email headers, such as "From" addresses, "Received" paths and the integrity of SPF, DKIM and DMARC records. Upon analysis, it generates comprehensive reports that indicate whether an email is likely to be malicious or benign. This capability empowers users to identify potentially dangerous emails promptly, enhancing their ability to avoid phishing attacks, malware infections and other cyber threats.
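A sketch of the kind of header inspection described, using only Python's standard library and a fabricated example message, might look like the following; the actual model's features and scoring are not reproduced here:

```python
from email import message_from_string

# Fabricated example message for illustration only.
RAW = """From: alice@example.com
Received: from mail.example.com (mail.example.com [203.0.113.5])
Authentication-Results: mx.example.net; spf=pass; dkim=pass; dmarc=pass
Subject: Quarterly report

Body text.
"""

def header_report(raw):
    """Summarize the header fields a classifier might use as features."""
    msg = message_from_string(raw)
    auth = (msg.get("Authentication-Results") or "").lower()
    return {
        "from": msg.get("From"),
        "received_hops": len(msg.get_all("Received") or []),
        "spf_pass": "spf=pass" in auth,
        "dkim_pass": "dkim=pass" in auth,
        "dmarc_pass": "dmarc=pass" in auth,
    }
```

A real pipeline would verify SPF/DKIM/DMARC cryptographically and against DNS rather than trusting the receiving server's Authentication-Results header, and would feed such extracted features into the trained model.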

Keywords: email security, artificial intelligence, header analysis, threat detection, phishing, DMARC, DKIM, SPF, AI model

Procedia PDF Downloads 48