Search results for: zinc based metal matrix composites

25645 Cost-Effective Hybrid Cloud Framework for HEIs

Authors: Shah Muhammad Butt, Ahmed Masaud Ansari

Abstract:

The present financial crisis in Higher Educational Institutes (HEIs) has brought considerable budget cuts, making it difficult to meet ever-growing IT-based research and learning needs; institutions are therefore rapidly planning and promoting cloud-based approaches for their academic and research needs. A cost-effective hybrid cloud framework for HEIs can provide educational services for campus or inter-campus communication. The hybrid cloud framework combines private and public cloud approaches. This paper proposes a framework based on open-source cloud software (OpenNebula for virtualization, Eucalyptus for infrastructure, and Aneka as the programming development environment) combined with cloud service providers' (CSPs') services, which are delivered to the end user via the Internet from public clouds.

Keywords: educational services, hybrid campus cloud, open source, electrical and systems sciences

Procedia PDF Downloads 440
25644 Robust Variable Selection Based on Schwarz Information Criterion for Linear Regression Models

Authors: Shokrya Saleh A. Alshqaq, Abdullah Ali H. Ahmadini

Abstract:

The Schwarz information criterion (SIC) is a popular tool for selecting the best variables in regression datasets. However, SIC is defined using the least-squares (LS) estimator, whose influence function is unbounded, making it highly sensitive to outlying observations, especially bad leverage points. A method for robust variable selection based on SIC for linear regression models is thus needed. This study investigates the robustness properties of SIC by deriving its influence function and proposes a robust SIC based on the MM-estimate of scale. The aim of this study is to produce a criterion that can effectively select accurate models in the presence of vertical outliers and high leverage points. The advantages of the proposed robust SIC are demonstrated through a simulation study and an analysis of a real dataset.
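For context, a minimal sketch of the criterion as the abstract describes it: the classical SIC penalizes the log of the LS residual scale, and the robust variant swaps in an MM-estimate of scale (the exact penalty form used by the authors may differ):

```latex
% Classical SIC for a candidate model with p regressors and n observations,
% and the robust variant with the LS scale replaced by an MM-scale estimate.
\mathrm{SIC}(p) = n \log \hat{\sigma}^{2}_{\mathrm{LS}} + p \log n
\quad\longrightarrow\quad
\mathrm{SIC}_{R}(p) = n \log \hat{\sigma}^{2}_{\mathrm{MM}} + p \log n
```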

Keywords: influence function, robust variable selection, robust regression, Schwarz information criterion

Procedia PDF Downloads 126
25643 Model Solutions for Performance-Based Seismic Analysis of an Anchored Sheet Pile Quay Wall

Authors: C. J. W. Habets, D. J. Peters, J. G. de Gijt, A. V. Metrikine, S. N. Jonkman

Abstract:

Conventional seismic designs of quay walls in ports are mostly based on pseudo-static analysis. A more advanced alternative is the Performance-Based Design (PBD) method, which evaluates permanent deformations and amounts of (repairable) damage under seismic loading. The aim of this study is to investigate the suitability of this method for anchored sheet pile quay walls that were not purposely designed for seismic loads. A research methodology is developed in which pseudo-static, permanent-displacement and finite element analyses are employed, calibrated with an experimental reference case that considers a typical anchored sheet pile wall. A reduction factor that accounts for deformation behaviour is determined for pseudo-static analysis. A model to apply traditional permanent-displacement analysis to anchored sheet pile walls is proposed. Dynamic analysis is successfully carried out. From the research it is concluded that PBD evaluation can effectively be used for seismic analysis and design of this type of structure.
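The permanent-displacement method referred to here is commonly implemented as Newmark sliding-block integration. Below is a minimal Python sketch under the simplifying assumption of a constant yield acceleration; real quay-wall applications, including the model proposed in this paper, involve more elaborate yield behaviour:

```python
import numpy as np

def newmark_displacement(acc, dt, a_yield):
    """Newmark sliding-block permanent displacement (minimal sketch).

    acc     : ground acceleration time history [m/s^2]
    dt      : time step [s]
    a_yield : constant yield acceleration of the wall-soil system [m/s^2]
    Returns the accumulated permanent displacement [m].
    """
    vel, disp = 0.0, 0.0
    for a in acc:
        # The block slides only while ground acceleration exceeds the
        # yield level or while it still has relative velocity.
        if a > a_yield or vel > 0.0:
            vel = max(vel + (a - a_yield) * dt, 0.0)  # sliding cannot reverse
            disp += vel * dt
    return disp

# Synthetic pulse: 2 s of 3 m/s^2 shaking against a 1.5 m/s^2 yield level
t = np.arange(0.0, 10.0, 0.01)
acc = 3.0 * np.sin(2 * np.pi * 1.0 * t) * (t < 2.0)
print(f"permanent displacement ~ {newmark_displacement(acc, 0.01, 1.5):.3f} m")
```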

Keywords: anchored sheet pile quay wall, simplified dynamic analysis, performance-based design, pseudo-static analysis

Procedia PDF Downloads 368
25642 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility

Authors: Fu Jinyu, Lin Jinguan

Abstract:

This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the “GQARCH-Itô-Jumps model.” We adopt realized range-based threshold estimation for high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimation. The asymptotic theory is mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology. Specifically, it is demonstrated how the proposed approaches can be applied in practice to financial data.
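As background, the realized range estimator mentioned here is usually built from Parkinson-scaled intraday ranges, with a threshold variant that truncates jump-affected intervals; this is the standard formulation, and the paper's exact threshold rule may differ:

```latex
% Parkinson-scaled realized range over n intraday intervals of day t;
% H_{t,i} and L_{t,i} are the interval high and low. The threshold version
% keeps only intervals whose squared range falls below a cutoff r_n,
% filtering out the jump component.
RRV_t = \frac{1}{4\log 2}\sum_{i=1}^{n}\left(\log H_{t,i}-\log L_{t,i}\right)^2,
\qquad
RRV_t^{thr} = \frac{1}{4\log 2}\sum_{i=1}^{n}\left(\log H_{t,i}-\log L_{t,i}\right)^2
\mathbf{1}\!\left\{\left(\log H_{t,i}-\log L_{t,i}\right)^2 \le r_n\right\}
```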

Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate

Procedia PDF Downloads 140
25641 Self-Regulated Learning: A Required Skill for Web 2.0 Internet-Based Learning

Authors: Pieter Conradie, M. Marina Moller

Abstract:

Web 2.0 Internet-based technologies have pervaded all aspects of human life. Presently, this phenomenon is especially evident in the educational context, with disruptive Web 2.0 technology infusions dramatically changing educational practice. The most prominent of these Web 2.0 intrusions are Massive Open Online Courses (Coursera, EdX), video and photo sharing sites (YouTube, Flickr, Instagram), and Web 2.0 online tools utilized to create Personal Learning Environments (PLEs): Symbaloo (aggregator), Delicious (social bookmarking), PBWorks (collaboration), Google+ (social networks), WordPress (blogs), and Wikispaces (wikis). These Web 2.0 technologies have supported the realignment from a teacher-based pedagogy (didactic presentation) to a learner-based pedagogy (problem-based learning, project-based learning, blended learning), allowing greater learner autonomy. No longer is the educator the source of knowledge; instead, the educator has become the facilitator and mediator of the learner, involved in developing learner competencies to support life-long (continuous) learning in the 21st century. In this study, the self-regulated learning skills of thirty first-year university learners were explored using the Online Self-regulated Learning Questionnaire. Implementing an action research method, an intervention was effected to improve the self-regulation skill set of the participants. Statistically significant results were obtained, with increased self-regulated learning proficiency positively impacting learner performance. Goal setting, time management, environment structuring, help seeking, task (learning) strategies, and self-evaluation skills were confirmed as determinants of improved learner success.

Keywords: andragogy, online self-regulated learning questionnaire, self-regulated learning, web 2.0

Procedia PDF Downloads 402
25640 Imputing the Minimum Social Value of Public Healthcare: A General Equilibrium Model of Israel

Authors: Erez Yerushalmi, Sani Ziv

Abstract:

The rising demand for healthcare services, without a corresponding rise in public supply, has led to a debate on whether to increase private healthcare provision, especially in hospital services and second-tier healthcare. Proponents of increasing private healthcare highlight gains in efficiency, while opponents highlight the risk to social welfare. Neither side, however, provides a measure of the social value and its impact on the economy in monetary terms. In this paper, we impute a minimum social value of public healthcare that corresponds to indifference between gains in efficiency and losses to social welfare. Our approach resembles contingent valuation methods that introduce a hypothetical market for non-commodities, but differs from them because we use numerical simulation techniques to exploit certain market failure conditions. We develop a general equilibrium model that distinguishes between public-private healthcare services and public-private financing. Furthermore, the social value is modelled as a by-product of healthcare services. The model is then calibrated to our unique health-focused Social Accounting Matrix of Israel and simulates the introduction of a hypothetical health-labour market, given that this market is heavily regulated in the baseline (i.e., the true situation in Israel today). For baseline parameters, we estimate the minimum social value at around 18% of public healthcare financing. The intuition is that the gain in economic welfare from improved efficiency is offset by the loss in social welfare due to a reduction in available social value. We furthermore simulate a deregulated healthcare scenario that internalizes the imputed social value and searches for the optimal weight of public and private healthcare provision.

Keywords: contingent valuation method (CVM), general equilibrium model, hypothetical market, private-public healthcare, social value of public healthcare

Procedia PDF Downloads 130
25639 Evaluation of Model-Based Code Generation for Embedded Systems: A Mature Approach for Development in Evolution

Authors: Nikolay P. Brayanov, Anna V. Stoynova

Abstract:

The model-based development approach is gaining support and acceptance. Its higher abstraction level simplifies system description, allowing domain experts to do their best work without particular programming knowledge. The different levels of simulation support rapid prototyping, verification, and validation of the product even before it exists physically. Nowadays, the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, bringing extra automation to the expensive device certification process, especially software qualification. Some companies report cost savings and quality improvements from using it, but others claim no major changes or even cost increases. This publication examines the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using tools from The MathWorks, Inc. The model, created with Simulink, Stateflow, and MATLAB, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, where applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the generated embedded code with that of manually developed code. The measurements show that, in general, the automatically generated code is no worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.

Keywords: embedded code generation, embedded C code quality, embedded systems, model-based development

Procedia PDF Downloads 228
25638 Comparative Analysis of Dissimilarity Detection between Binary Images Based on Equivalency and Non-Equivalency of Image Inversion

Authors: Adnan A. Y. Mustafa

Abstract:

Image matching is a fundamental problem that arises frequently in many aspects of robot and computer vision. It can become a time-consuming process when matching images against a database consisting of hundreds of images, especially if the images are large. One approach to reducing the time complexity of the matching process is to reduce the search space in a pre-matching stage by quickly removing dissimilar images. The Probabilistic Matching Model for Binary Images (PMMBI) showed that dissimilarity detection between binary images can be accomplished quickly by random pixel mapping and is size invariant. The model is based on the gamma binary similarity distance, which recognizes an image and its inverse as containing the same scene and hence considers them to be the same image. However, in many applications, an image and its inverse are not treated as the same but rather as dissimilar. In this paper, we present a comparative analysis of dissimilarity detection between the PMMBI based on the gamma binary similarity distance and a modified PMMBI based on a similarity distance that does distinguish an image from its inverse, treating them as dissimilar.
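A minimal Python sketch of the random-pixel-mapping idea (the names and the folding rule are illustrative; the actual PMMBI probability analysis and the gamma distance definition are in the cited model):

```python
import numpy as np

def sampled_mismatch(img_a, img_b, n_samples=200, rng=None):
    """Estimate the fraction of mismatched pixels between two equally
    sized binary images by random pixel mapping: sampling the same
    random locations in both images estimates the mismatch rate without
    a full scan, so clearly dissimilar pairs are rejected quickly."""
    rng = np.random.default_rng(rng)
    rows = rng.integers(0, img_a.shape[0], n_samples)
    cols = rng.integers(0, img_a.shape[1], n_samples)
    return float(np.mean(img_a[rows, cols] != img_b[rows, cols]))

a = np.random.default_rng(0).integers(0, 2, (256, 256), dtype=bool)
m = sampled_mismatch(a, ~a)
# Inverse-distinguishing distance: use m directly (here m ~ 1.0 -> dissimilar).
# Gamma-style distance: fold m around 0.5, e.g. min(m, 1 - m), so an image
# and its inverse are still treated as the same scene.
print(m, min(m, 1.0 - m))
```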

Keywords: binary image, dissimilarity detection, probabilistic matching model for binary images, image mapping

Procedia PDF Downloads 138
25637 Assessment-Assisted and Relationship-Based Financial Advising: Using an Empirical Assessment to Understand Personal Investor Risk Tolerance in Professional Advising Relationships

Authors: Jerry Szatko, Edan L. Jorgensen, Stacia Jorgensen

Abstract:

A crucial component to the success of any financial advising relationship is for the financial professional to understand the perceptions, preferences and thought-processes carried by the financial clients they serve. Armed with this information, financial professionals are more quickly able to understand how they can tailor their approach to best match the individual preferences and needs of each personal investor. Our research explores the use of a quantitative assessment tool in the financial services industry to assist in the identification of the personal investor’s consumer behaviors, especially in terms of financial risk tolerance, as it relates to their financial decision making. Through this process, the Unitifi Consumer Insight Tool (UCIT) was created and refined to capture and categorize personal investor financial behavioral categories and the financial personality tendencies of individuals prior to the initiation of a financial advisement relationship. This paper discusses the use of this tool to place individuals in one of four behavior-based financial risk tolerance categories. Our discoveries and research were aided through administration of a web-based survey to a group of over 1,000 individuals. Our findings indicate that it is possible to use a quantitative assessment tool to assist in predicting the behavioral tendencies of personal consumers when faced with consumer financial risk and decisions.

Keywords: behavior-based advising, financial relationship building, risk capacity based on behavior, risk tolerance, systematic way to assist in financial relationship building

Procedia PDF Downloads 154
25636 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete

Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml

Abstract:

Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in current design codes, for example, DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs, or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most finite element analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack cannot be seen at the element level. Crack propagation in concrete is a discontinuous process characterized by factors such as the initially random distribution of defects and the scatter of material properties. Such behaviour presupposes adequate models and simulation methods, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with modelling the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. A parameter study was carried out to investigate (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending and (II) crack initiation as a function of the diameter and spacing of the transverse reinforcement bars. The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading with given boundary conditions. To model the uncertainty in the tensile strength of concrete in the finite element analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods to generate random fields, e.g., the Covariance Matrix Decomposition Method. For all computations, a plastic constitutive law with softening was used to model crack initiation and damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions were validated against experimental studies on R/C panels carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. A recommendation for the parameters of the random field for realistically modelling the uncertainty of the tensile strength is also given. The aim of this research was to show a method in which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be captured in finite element analysis.
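The Covariance Matrix Decomposition Method named above is typically a Cholesky factorization of the covariance matrix of the field points. A minimal Python sketch for a 1D field with an exponential covariance model follows; the mean, standard deviation, and correlation length are illustrative, not the paper's calibrated values:

```python
import numpy as np

def gaussian_random_field(x, mean, std, corr_length, rng=None):
    """Correlated Gaussian random field by covariance matrix decomposition:
    build the covariance of all field points, factor it (Cholesky), and
    colour a vector of independent standard normal samples."""
    rng = np.random.default_rng(rng)
    dist = np.abs(x[:, None] - x[None, :])
    cov = std**2 * np.exp(-dist / corr_length)            # exponential model
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))  # jitter for stability
    return mean + L @ rng.standard_normal(len(x))

# Concrete tensile strength along a 5 m member, correlation length 0.5 m
x = np.linspace(0.0, 5.0, 101)
f_ct_normal = gaussian_random_field(x, mean=2.9, std=0.4, corr_length=0.5, rng=1)
f_ct_lognormal = np.exp(gaussian_random_field(x, 1.05, 0.14, 0.5, rng=1))
```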

Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic

Procedia PDF Downloads 135
25635 Arabic Lexicon Learning to Analyze Sentiment in Microblogs

Authors: Mahmoud B. Rokaya

Abstract:

The study of opinion mining and sentiment analysis includes the analysis of opinions, sentiments, evaluations, attitudes, and emotions. The rapid growth of social media, social networks, reviews, forum discussions, microblogs, and Twitter leads to a parallel growth in the field of sentiment analysis, which tries to develop effective tools for capturing the trends of people. There are two approaches in the field: lexicon-based and corpus-based methods. A lexicon-based method uses a sentiment lexicon that includes sentiment words and phrases with assigned numeric scores. These scores reveal whether sentiment phrases are positive or negative, their intensity, and/or their emotional orientations. Manual creation of lexicons is hard, which brings the need for adaptive automated methods of lexicon generation. The proposed method generates dynamic lexicons based on the corpus and then classifies text using these lexicons. Different approaches are combined to generate lexicons from text, and tweets are classified into five classes instead of simply positive or negative. The sentiment classification problem is written as an optimization problem whose goal is to find the optimum sentiment lexicon; the solution is based on mathematical programming approaches, and a genetic algorithm was written to find the optimal lexicon. Then, a meta-level feature was extracted based on the optimal lexicon. Experiments were conducted on several datasets. Results, in terms of accuracy, recall, and F-measure, outperformed the state-of-the-art methods proposed in the literature on some of the datasets. A better understanding of the Arabic language, of the culture of Arab Twitter users, and of the sentiment orientation of words in different contexts can be achieved based on the sentiment lexicons proposed by the algorithm.
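A compact Python sketch of the genetic-algorithm step on toy data (the encoding, fitness, and operators here are illustrative choices, not the authors' exact design): each chromosome is a vector of word scores, and fitness is the accuracy of binning a tweet's mean score into one of five classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: each tweet is a bag of vocabulary indices, labels are 0..4
vocab_size, n_tweets = 50, 200
tweets = [rng.integers(0, vocab_size, rng.integers(3, 10)) for _ in range(n_tweets)]
labels = rng.integers(0, 5, n_tweets)        # five sentiment classes
bins = np.array([-2.0, -0.5, 0.5, 2.0])      # mean-score cutoffs -> 5 classes

def fitness(lexicon):
    """Accuracy when tweets are classified by their mean lexicon score."""
    scores = np.array([lexicon[t].mean() for t in tweets])
    return np.mean(np.digitize(scores, bins) == labels)

pop = rng.normal(0.0, 1.0, (40, vocab_size))  # population of candidate lexicons
for generation in range(100):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-20:]]                 # truncation selection
    cut = vocab_size // 2                                # one-point crossover
    children = np.concatenate([parents[:10, :cut], parents[10:, cut:]], axis=1)
    children = children + rng.normal(0.0, 0.1, children.shape)  # mutation
    pop = np.concatenate([parents, children,
                          children + rng.normal(0.0, 0.1, children.shape)])

best_lexicon = pop[np.argmax([fitness(ind) for ind in pop])]
```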

Keywords: social media, Twitter sentiment, sentiment analysis, lexicon, genetic algorithm, evolutionary computation

Procedia PDF Downloads 169
25634 Demand for Index Based Micro-Insurance (IBMI) in Ethiopia

Authors: Ashenafi Sileshi Etefa, Bezawit Worku Yenealem

Abstract:

Micro-insurance is a relatively new concept that is just being introduced in Ethiopia. For an agrarian economy dominated by smallholder farming and vulnerable to natural disasters, mainly drought, the need for Index-Based Micro-Insurance (IBMI) is crucial. Since IBMI addresses moral hazard, adverse selection, and access issues for poor clients, it is preferable to traditional insurance products. IBMI is being piloted in drought-prone areas of Ethiopia with the aim of learning and expanding the service across the country. This article analyses the demand for IBMI and the barriers to demand. It finds that demand for IBMI has so far been constrained by lack of awareness, trust issues, costliness, and the level of basis risk, and recommends reducing the basis risk and increasing the role of government and farmer cooperatives.

Keywords: agriculture, index based micro-insurance (IBMI), drought, micro-finance institution (MFI)

Procedia PDF Downloads 275
25633 Scalable Learning of Tree-Based Models on Sparsely Representable Data

Authors: Fares Hedayatit, Arnauld Joly, Panagiotis Papadimitriou

Abstract:

Many machine learning tasks, such as text annotation, require training over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of-the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than of the number of non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude, and it has been incorporated into the current version of scikit-learn (scikit-learn.org), the most popular open-source Python machine learning library.
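Since the abstract states the algorithm ships in scikit-learn, a small usage sketch with synthetic sparse data follows (the dimensions and density are illustrative): passing a SciPy CSR matrix keeps both memory and split-finding cost proportional to the non-zeros rather than the full input space.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.ensemble import RandomForestClassifier

# 2,000 samples in a 100,000-dimensional space with ~10 non-zeros per row:
# dense storage would need ~1.6 GB; the CSR matrix holds only the non-zeros.
X = sparse_random(2_000, 100_000, density=1e-4, format="csr", random_state=0)
row_sums = np.asarray(X.sum(axis=1)).ravel()
y = row_sums > np.median(row_sums)        # toy binary target

clf = RandomForestClassifier(n_estimators=20, random_state=0)
clf.fit(X, y)                             # sparse-aware splitting
print(clf.score(X, y))
```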

Keywords: big data, sparsely representable data, tree-based models, scalable learning

Procedia PDF Downloads 248
25632 Combining the Dynamic Conditional Correlation and Range-GARCH Models to Improve Covariance Forecasts

Authors: Piotr Fiszeder, Marcin Fałdziński, Peter Molnár

Abstract:

The dynamic conditional correlation model of Engle (2002) is one of the most popular multivariate volatility models. However, this model is based solely on closing prices. It has been documented in the literature that the high and low prices of the day can be used for efficient volatility estimation. We therefore suggest a model which incorporates high and low prices into the dynamic conditional correlation framework. Empirical evaluation of this model is conducted on three datasets: currencies, stocks, and commodity exchange-traded funds. The utilisation of realized variances and covariances as proxies for the true variances and covariances allows us to reach a strong conclusion: our model outperforms not only the standard dynamic conditional correlation model but also a competing range-based dynamic conditional correlation model.
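For reference, the standard way to bring the daily range into a GARCH recursion is the Parkinson (1980) estimator shown below; this is a common formulation, and the paper's exact Range-GARCH specification may differ:

```latex
% Parkinson range-based variance estimate for day t, built from the daily
% high H_t and low L_t; it replaces the squared return in the GARCH
% updating equation of each asset before the DCC correlation step.
\hat{\sigma}_{P,t}^{2} = \frac{\left(\ln H_t - \ln L_t\right)^{2}}{4 \ln 2},
\qquad
h_t = \omega + \alpha\, \hat{\sigma}_{P,t-1}^{2} + \beta\, h_{t-1}
```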

Keywords: volatility, DCC model, high and low prices, range-based models, covariance forecasting

Procedia PDF Downloads 167
25631 A Review on the Use of Herbal Alternatives to Antibiotics in Poultry Diets

Authors: Sasan Chalaki, Seyed Ali Mirgholange, Touba Nadri, Saman Chalaki

Abstract:

In the current world, proper poultry nutrition has garnered special attention as one of the fundamental factors for enhancing poultry health and performance. Concerns related to the excessive use of antibiotics in the poultry industry and their role in antibiotic resistance have transformed this issue into a global challenge for public health and the environment. At the same time, poultry farming plays a vital role as a primary source of meat and eggs in human nutrition, and improving poultry health and performance is crucial. One effective approach to enhancing poultry nutrition is the utilization of the antibiotic properties of plant-based ingredients. The use of plant-based alternatives as natural antibiotics in poultry nutrition not only aids in improving poultry health and performance but also plays a significant role in reducing the consumption of synthetic antibiotics and preventing antibiotic-resistance-related issues. Plants contain various antibacterial compounds, such as flavonoids, tannins, and essential oils, which are recognized as active agents in combating bacteria. Plant-based antibiotics are compounds extracted from plants with antibacterial properties and are acknowledged as effective substitutes for chemical antibiotics in poultry diets. Their advantages include reducing the risk of resistance to chemical antibiotics, increasing poultry growth performance, and lowering the risk of disease transmission.

Keywords: poultry, antibiotics, essential oils, plant-based

Procedia PDF Downloads 53
25630 Bridge Health Monitoring: A Review

Authors: Mohammad Bakhshandeh

Abstract:

Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.

Keywords: structural health monitoring (SHM), bridge health monitoring (BHM), sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis

Procedia PDF Downloads 74
25629 Case-Based Reasoning: A Hybrid Classification Model Improved with an Expert's Knowledge for High-Dimensional Problems

Authors: Bruno Trstenjak, Dzenana Donko

Abstract:

Data mining and object classification is a process of data analysis using various machine learning techniques, which is applied today in many fields of research. This paper presents the concept of a hybrid classification model improved with expert knowledge. The hybrid model integrates several machine learning techniques (Information Gain, K-means, and Case-Based Reasoning) together with the expert's knowledge in one algorithm. The knowledge of experts is used to determine the importance of features. The paper presents the model algorithm and the results of a case study in which the emphasis was put on achieving maximum classification accuracy without reducing the number of features.
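A brief Python sketch of two of the ideas the abstract combines, expert-weighted feature importance and case-based retrieval (mutual information stands in for Information Gain here, the K-means clustering stage is omitted, and all names are illustrative):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def combined_weights(X, y, expert_weights):
    """Scale data-driven information gain by expert-assigned importance."""
    info_gain = mutual_info_classif(X, y, random_state=0)
    w = info_gain * expert_weights
    return w / w.sum()

def cbr_classify(query, X, y, w, k=5):
    """CBR step: retrieve the k most similar stored cases under the
    weighted Euclidean distance and return the majority class."""
    d = np.sqrt((((X - query) ** 2) * w).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return np.bincount(y[nearest]).argmax()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
w = combined_weights(X, y, expert_weights=np.array([3, 1, 1, 3, 1, 1, 1, 1.0]))
print(cbr_classify(X[0], X[1:], y[1:], w))
```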

Keywords: case based reasoning, classification, expert's knowledge, hybrid model

Procedia PDF Downloads 357
25628 Empirical Mode Decomposition Based Denoising by Customized Thresholding

Authors: Wahiba Mohguen, Raïs El’hadi Bekka

Abstract:

This paper presents a denoising method called EMD-Custom, based on Empirical Mode Decomposition (EMD) and a modified customized thresholding function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). Then, the noisy IMFs are thresholded by applying the presented thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising using soft and hard thresholding. The results showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches. Performance was evaluated in terms of SNR (in dB) and mean square error (MSE).
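A minimal Python sketch of the pipeline, assuming the PyEMD package (pip package EMD-signal) for the decomposition; the `custom_threshold` shrinkage rule below is only one plausible soft/hard compromise, not the paper's exact Custom function:

```python
import numpy as np
from PyEMD import EMD   # assumed dependency: pip install EMD-signal

def custom_threshold(x, t, alpha=2.0):
    """Keep coefficients above the threshold (hard) and shrink those
    below it smoothly -- a compromise between soft and hard rules."""
    return np.where(np.abs(x) > t,
                    x,
                    np.sign(x) * np.abs(x) ** alpha / t ** (alpha - 1))

def emd_denoise(signal, n_noisy=3):
    """Decompose, threshold the first (noise-dominated) IMFs, rebuild."""
    imfs = EMD().emd(signal)
    cleaned = [custom_threshold(imf, t=np.std(imf)) if i < n_noisy else imf
               for i, imf in enumerate(imfs)]
    return np.sum(cleaned, axis=0)

t = np.linspace(0, 1, 1000)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(0).standard_normal(1000)
denoised = emd_denoise(noisy)
```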

Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft-thresholding

Procedia PDF Downloads 292
25627 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, the ensemble has been employed as a common methodology for improving performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus, due to a phenomenon called the “curse of correlation,” which manifests as strong interference among the predictions produced by the base classifiers. In addition, existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that both problems are caused by inherent deficiencies in the consensus approach. We therefore create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. To evaluate the proposed ensemble algorithm, we employ the well-known benchmark dataset NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to compare the proposed algorithm with 8 common ensemble algorithms. Each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the accuracy and reliability improvements over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus proves to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, including accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve.
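The abstract does not spell out the consensus procedure, so the Python sketch below is one plausible reading rather than the authors' algorithm: scikit-learn-style base classifiers are ranked by validation quality, and each labels only the cases it is confident about, passing the rest down the chain so that weak votes cannot cancel strong ones.

```python
import numpy as np

def chain_consensus(classifiers, val_scores, X, confident=0.75):
    """Sketch of a rank-based chain-mode consensus (illustrative reading).

    classifiers : fitted scikit-learn-style models with predict_proba
    val_scores  : per-classifier validation quality used for ranking
    """
    order = np.argsort(val_scores)[::-1]        # best classifier first
    pred = np.full(X.shape[0], -1)
    pending = np.arange(X.shape[0])
    for i in order:
        if pending.size == 0:
            break
        proba = classifiers[i].predict_proba(X[pending])
        sure = proba.max(axis=1) >= confident   # confident cases only
        pred[pending[sure]] = proba[sure].argmax(axis=1)
        pending = pending[~sure]                # pass the rest down the chain
    if pending.size:                            # fall back to the top model
        pred[pending] = classifiers[order[0]].predict(X[pending])
    return pred
```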

Keywords: consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble

Procedia PDF Downloads 122
25626 Development of Expanded Perlite-Caprylic Acid Composite for Temperature Maintenance in Buildings

Authors: Akhila Konala, Jagadeeswara Reddy Vennapusa, Sujay Chattopadhyay

Abstract:

The energy consumption of humankind is growing day by day due to population growth, industrialization, and rising living needs. Fossil fuels, which are non-renewable resources, are the major source of energy for satisfying these needs, so there is a need to develop green resources for energy production and storage. Phase change materials (PCMs) derived from plants (green resources) are well known for their capacity to store thermal energy as latent heat during their phase change from solid to liquid. In this study, a composite with a fatty acid (caprylic acid; melting point 15 °C, enthalpy 179 kJ/kg) as the phase change material and expanded perlite as the porous support matrix was prepared through the direct impregnation method for thermal energy storage applications. The prepared composite was characterized using differential scanning calorimetry (DSC), field emission scanning electron microscopy (FESEM), thermogravimetric analysis (TGA), and Fourier transform infrared (FTIR) spectroscopy. The melting point of the prepared composite was 15.65 °C, and its melting enthalpy was 82 kJ/kg. FESEM observation of the perlite surface showed micro-sized pores, which are responsible for the absorption of PCM into the perlite. In the TGA thermogram, PCM loss from the composite began at ~90 °C. FTIR curves showed no chemical interaction between the perlite and the caprylic acid. The PCM composite prepared in this work could therefore be effective for temperature maintenance in buildings.
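A back-of-envelope check, not stated in the abstract: if the perlite matrix stores no latent heat itself, the two reported enthalpies imply the PCM mass fraction of the composite,

```latex
% Implied caprylic acid loading of the composite (assuming the perlite
% contributes no latent heat):
w_{\mathrm{PCM}} \approx \frac{\Delta H_{\mathrm{composite}}}{\Delta H_{\mathrm{PCM}}}
= \frac{82\ \mathrm{kJ/kg}}{179\ \mathrm{kJ/kg}} \approx 0.46
```

i.e., roughly 46% caprylic acid by mass.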

Keywords: caprylic acid, composite, phase change materials, PCM, perlite, thermal energy

Procedia PDF Downloads 108
25625 Design and Optimization of Open Loop Supply Chain Distribution Network Using Hybrid K-Means Cluster Based Heuristic Algorithm

Authors: P. Suresh, K. Gunasekaran, R. Thanigaivelan

Abstract:

Radio frequency identification (RFID) technology has been attracting considerable attention, with the expectation of improved supply chain visibility for consumer goods, apparel, and pharmaceutical manufacturers, as well as retailers and government procurement agencies. It is also expected to improve the consumer shopping experience by making it more likely that the products consumers want to purchase are available. Recent announcements from some key retailers have brought interest in RFID to the forefront. A modified K-means cluster-based heuristic approach, a hybrid Genetic Algorithm (GA)-Simulated Annealing (SA) approach, a hybrid K-means cluster-based heuristic-GA, and a hybrid K-means cluster-based heuristic-GA-SA are proposed for the open loop supply chain network problem. The study incorporated the uniform crossover operator and a combined crossover operator in the GAs for solving the open loop supply chain distribution network problem. The algorithms were tested on 50 randomly generated data sets and compared with each other. The results of the numerical experiments show that the hybrid K-means cluster-based heuristic-GA-SA, when tested on the 50 randomly generated data sets, shows superior performance to the other methods for solving the open loop supply chain distribution network problem.
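A minimal Python sketch of the K-means seeding idea behind the hybrid heuristics (the cost model, sizes, and names are illustrative): customer sites are clustered and a distribution centre is opened at each centroid, giving the starting solution that the GA/SA stages would then refine.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_seed_network(customers, n_dcs, rng=0):
    """Seed an open loop distribution network: cluster customer
    locations and open a distribution centre at each centroid."""
    km = KMeans(n_clusters=n_dcs, n_init=10, random_state=rng).fit(customers)
    assignment = km.labels_                      # customer -> DC
    cost = np.linalg.norm(
        customers - km.cluster_centers_[assignment], axis=1).sum()
    return km.cluster_centers_, assignment, cost

customers = np.random.default_rng(0).uniform(0, 100, (50, 2))
centers, assignment, cost = cluster_seed_network(customers, n_dcs=5)
print(f"total transport distance ~ {cost:.1f}")
```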

Keywords: RFID, supply chain distribution network, open loop supply chain, genetic algorithm, simulated annealing

Procedia PDF Downloads 151
25624 A Comparison of Neural Network and DOE-Regression Analysis for Predicting Resource Consumption of Manufacturing Processes

Authors: Frank Kuebler, Rolf Steinhilper

Abstract:

Artificial neural networks (ANNs) as well as Design of Experiments (DOE) based regression analysis (RA) are mainly used for modeling complex systems. Both methodologies are commonly applied in process and quality control of manufacturing processes. Because resource efficiency has become a critical concern for manufacturing companies, these models need to be extended to predict the resource consumption of manufacturing processes. This paper describes an approach that uses neural networks as well as DOE-based regression analysis for predicting the resource consumption of manufacturing processes, and it compares the achievable results based on an industrial case study of a turning process.
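A small Python sketch of the comparison on synthetic stand-in data (the process variables, coefficients, and network size are illustrative, not the case study's): the DOE-style model is a polynomial regression with interaction terms, the ANN a small multilayer perceptron.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Toy stand-in for the turning process: energy use as a function of
# cutting speed and feed rate (illustrative data, not the case study's).
rng = np.random.default_rng(0)
X = rng.uniform([50, 0.1], [300, 0.5], (60, 2))          # speed, feed
y = 0.02 * X[:, 0] + 40 * X[:, 1] + 0.1 * X[:, 0] * X[:, 1] \
    + rng.normal(0, 0.5, 60)

# DOE-style response surface: linear model with squares and interactions
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearRegression()).fit(X, y)
# ANN counterpart
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, y)
print(rsm.score(X, y), ann.score(X, y))                  # in-sample R^2
```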

Keywords: artificial neural network, design of experiments, regression analysis, resource efficiency, manufacturing process

Procedia PDF Downloads 509
25623 Deep-Learning Based Approach to Facial Emotion Recognition through Convolutional Neural Network

Authors: Nouha Khediri, Mohammed Ben Ammar, Monji Kherallah

Abstract:

Recently, facial emotion recognition (FER) has become increasingly essential for understanding the state of the human mind. However, accurately classifying emotion from the face is a challenging task. In this paper, we present a facial emotion recognition approach named CV-FER, benefiting from deep learning, especially CNNs and VGG16. First, the data is pre-processed with data cleaning and data rotation. Then, we augment the data and feed it to our FER model, which contains five convolutional layers and five pooling layers. Finally, a softmax classifier is used in the output layer to recognize emotions. The paper also reviews related work on deep-learning-based facial emotion recognition. Experiments show that our model outperforms the other methods using the same FER2013 database, yielding a recognition rate of 92%. We also put forward some suggestions for future work.
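A Keras sketch of a network with the stated shape, five convolution and five pooling stages ending in a softmax, sized for 48x48 grayscale FER2013 faces; the filter counts and dense width are illustrative, not the paper's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([tf.keras.Input(shape=(48, 48, 1))])
for filters in (32, 64, 128, 128, 256):        # five convolutional blocks
    model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.MaxPooling2D(2))          # five pooling layers: 48 -> 1
model.add(layers.Flatten())
model.add(layers.Dense(128, activation="relu"))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(7, activation="softmax"))   # 7 FER2013 emotion classes

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```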

Keywords: CNN, deep-learning, facial emotion recognition, machine learning

Procedia PDF Downloads 75
25622 A Survey of Recognizing of Daily Living Activities in Multi-User Smart Home Environments

Authors: Kulsoom S. Bughio, Naeem K. Janjua, Gordana Dermody, Leslie F. Sikos, Shamsul Islam

Abstract:

The advancement in information and communication technologies (ICT) and wireless sensor networks have played a pivotal role in the design and development of real-time healthcare solutions, mainly targeting the elderly living in health-assistive smart homes. Such smart homes are equipped with sensor technologies to detect and record activities of daily living (ADL). This survey reviews and evaluates existing approaches and techniques based on real-time sensor-based modeling and reasoning in single-user and multi-user environments. It classifies the approaches into three main categories: learning-based, knowledge-based, and hybrid, and evaluates how they handle temporal relations, granularity, and uncertainty. The survey also highlights open challenges across various disciplines (including computer and information sciences and health sciences) to encourage interdisciplinary research for the detection and recognition of ADLs and discusses future directions.

Keywords: daily living activities, smart homes, single-user environment, multi-user environment

Procedia PDF Downloads 124
25621 A Comparative Study of Substituted Li Ferrites Sintered by the Conventional and Microwave Sintering Technique

Authors: Ibetombi Soibam

Abstract:

Li-Zn-Ni ferrite with the compositional formula Li0.4-0.5xZn0.2NixFe2.4-0.5xO4, where 0.02 ≤ x ≤ 0.1 in steps of 0.02, was fabricated by the citrate precursor method. In this method, metal nitrates and citric acid were used to prepare a gel that exhibits self-propagating combustion behaviour, giving the required ferrite sample. The sample was pre-fired at 650 °C in a programmable conventional furnace for 3 hours at a heating rate of 5 °C/min. After pre-firing, one series of the sample was given conventional sintering (CS) at 1040 °C, and another series was given microwave sintering (MS) at 1040 °C in a programmable microwave furnace using a single magnetron operating at a frequency of 2.45 GHz. X-ray diffraction patterns confirmed the spinel phase structure for both series. The theoretical and experimental densities were calculated; densification was found to increase with Ni concentration in both series, with the microwave-sintered samples being denser. The microstructure of the two series was examined using scanning electron microscopy (SEM). Dielectric properties were investigated as a function of frequency and composition for both series. The variation of the dielectric constant with frequency shows dispersion for both series, which is explained in terms of Koop's two-layer model. The analysis of the dielectric measurements shows that the room-temperature dielectric constant decreases with increasing Ni concentration in both series. The microwave-sintered samples show a lower dielectric constant, making microwave sintering suitable for high-frequency applications. The possible mechanisms contributing to the above behaviour are discussed.

Keywords: citrate precursor, dielectric constant, ferrites, microwave sintering

Procedia PDF Downloads 389
25620 Design and Evaluation of an Online Case-Based Library for Technology Integration in Teacher Education

Authors: Mustafa Tevfik Hebebci, Ismail Sahin, Sirin Kucuk, Ismail Celik, Ahmet Oguz Akturk

Abstract:

ADDIE is an instructional design model with five core elements: analyze, design, develop, implement, and evaluate. The ADDIE approach provides a systematic process for the analysis of instructional needs, the design and development of instructional programs and materials, the implementation of a program, and the evaluation of its effectiveness. The case-based study is an instructional design model that is a variant of project-oriented learning. Instructional designers can collect and analyze stories in two primary ways: to perform task analysis and as learning support during instruction. In addition, teachers use technology to develop students' thinking, enriching the learning environment and supporting lasting learning. The purpose of this paper is to introduce an interactive online case-study library website developed in a national project. The design goal of the website is to provide an interactive, enhanced, case-based, online educational resource for educators within the scope of the national project. The ADDIE instructional design model was used in the development of the website for the interactive case-based library. The web-based library contains the following navigation menus: “Homepage”, “Registration”, “Branches”, “Aim of the Research”, “About TPACK”, “National Project”, “Contact Us”, etc. The library is developed on a web-based platform, which is important for the manageability, accessibility, and updateability of the data. Users are able to sort the displayed case studies by title, date, rating, view count, etc., and are encouraged to rate and comment on the case studies. The website was evaluated through usability testing and expert opinion. This website is a tool for integrating technology into education, and it is believed that it will be beneficial for the professional development of pre-service and in-service teachers.

Keywords: design, ADDIE, case based library, technology integration

Procedia PDF Downloads 457
25619 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms still remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease's molecular mechanisms. Through the analysis of microarray data, we examined gene expression in the media and neo-intima from plaques, as well as in distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns, a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and 102 DEGs with lipid-related genes from the GeneCards database. The discriminative power of the six hub genes was estimated with a classifier that achieved an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, broadening the potential for clinical applications and therapeutic discoveries.
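A self-contained Python sketch of the final selection-plus-classification step (the gene sets and expression values below are synthetic placeholders; the study's AUC of 0.873 comes from its real cohort):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hub-gene selection as described: key WGCNA genes ∩ DEGs ∩ lipid genes
wgcna_key = {"CD36", "DPP4", "HMOX1", "PLA2G7", "PLN2", "ACADL", "ABCA1"}
degs      = {"CD36", "DPP4", "HMOX1", "PLA2G7", "PLN2", "ACADL", "IL6"}
lipid     = {"CD36", "DPP4", "HMOX1", "PLA2G7", "PLN2", "ACADL", "LDLR"}
hub = sorted(wgcna_key & degs & lipid)          # the six hub genes

# Synthetic expression matrix standing in for the 32-patient cohort
y = np.repeat([1, 0], 16)                        # plaque vs intact tissue
X = rng.normal(0, 1, (32, len(hub))) + 0.9 * y[:, None]

proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=4, method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, proba), 3))  # the paper reports 0.873
```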

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 49
25618 Evolution of Nettlespurge Oil Mud for Drilling Mud System: A Comparative Study of Diesel Oil and Nettlespurge Oil as Oil-Based Drilling Mud

Authors: Harsh Agarwal, Pratikkumar Patel, Maharshi Pathak

Abstract:

Recently, low crude oil prices and increasingly strict environmental regulations have limited the use of diesel-based muds: these muds are relatively costly and toxic, and as a result the disposal of cuttings into the ecosystem is a major issue faced by the drilling industry. To overcome these issues, an attempt has been made to develop an oil-in-water emulsion mud system using nettlespurge oil. Nettlespurge oil is easily available and costs around ₹30/litre, about half the price of diesel in India. An oil-based mud (OBM) was formulated with nettlespurge oil extracted from nettlespurge seeds using the Soxhlet extraction method. The properties of the formulated nettlespurge oil mud were compared with those of a diesel oil mud: rheological properties (yield point and gel strength), mud density, and filtration loss properties (fluid loss and filter cake). The mud density measurements showed that the nettlespurge OBM was slightly heavier than the diesel OBM, with densities of 9.175 lb/gal and 8.5 lb/gal, respectively, at a barite content of 70 g; it also has a higher lubricating property. The filtration loss tests showed that the nettlespurge mud fluid loss volume, 11 ml, was considerably lower than the diesel oil mud volume of 15 ml, indicating low formation damage and good emulsion stability. The nettlespurge oil mud gave a filter cake thickness of 2.2 mm with a thin and squashy cake character, while the diesel oil mud produced a 2.7 mm thick cake that was tenacious, rubbery, and resilient. Additionally, the nettlespurge oil-in-water mud system had a lower coefficient of friction than the diesel-based system, and all rheological properties showed better results relative to the diesel-based oil mud. Therefore, from all the above factors and the experimental data, it can be concluded that the nettlespurge oil-based mud is economically and ecologically much more feasible than the conventional diesel-based oil mud for the drilling industry.

Keywords: economic feasibility, ecological feasibility, emulsion stability, nettlespurge oil, rheological properties, Soxhlet extraction method

Procedia PDF Downloads 188
25617 Satellite-Based Drought Monitoring in Korea: Methodologies and Merits

Authors: Joo-Heon Lee, Seo-Yeon Park, Chanyang Sur, Ho-Won Jang

Abstract:

Satellite-based remote sensing techniques have been widely used in drought and environmental monitoring to overcome the weaknesses of in-situ monitoring. Remote sensing offers many advantages for drought watch in terms of data accessibility, monitoring resolution, and the types of available hydro-meteorological data, including environmental variables. This study focused on the applicability of satellite-based drought monitoring by applying it to historical drought events that had a huge meteorological, agricultural, and hydrological impact. Satellite-based drought indices - the Standardized Precipitation Index (SPI) using Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) data; the Vegetation Health Index (VHI) using MODIS-based Land Surface Temperature (LST) and the Normalized Difference Vegetation Index (NDVI); and the Scaled Drought Condition Index (SDCI) - were evaluated to assess their capability to handle the complex topography of the Korean peninsula. While the VHI accurately captured moderate drought conditions in agricultural drought-damaged areas, the SDCI performed relatively well in hydrological drought-damaged areas. In addition, this study examined correlations among the various drought indices and their applicability using the Receiver Operating Characteristic (ROC) method, which will expand our understanding of the relationships between hydro-meteorological variables and drought events at the global scale. The results of this research are expected to assist decision makers in taking timely and appropriate action in order to save millions of lives in drought-damaged areas.
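For reference, the VHI named above is conventionally computed from per-pixel historical extremes of NDVI and LST (Kogan's formulation, with the usual equal weighting α = 0.5):

```latex
% Vegetation Condition Index and Temperature Condition Index from MODIS
% NDVI and LST; min/max are taken per pixel over the historical record.
VCI = 100\,\frac{NDVI - NDVI_{min}}{NDVI_{max} - NDVI_{min}},\qquad
TCI = 100\,\frac{LST_{max} - LST}{LST_{max} - LST_{min}},\qquad
VHI = \alpha\, VCI + (1-\alpha)\, TCI
```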

Keywords: drought monitoring, moderate resolution imaging spectroradiometer (MODIS), remote sensing, receiver operating characteristic (ROC)

Procedia PDF Downloads 314
25616 Design of Bayesian MDS Sampling Plan Based on the Process Capability Index

Authors: Davood Shishebori, Mohammad Saber Fallah Nezhad, Sina Seifi

Abstract:

In this paper, a variable multiple dependent state (MDS) sampling plan is developed based on the process capability index using a Bayesian approach. The optimal parameters of the developed sampling plan, subject to constraints on the consumer's and producer's risks, are presented. Two comparison studies have been carried out. First, the double sampling model, the sampling plan for resubmitted lots, and the repetitive group sampling (RGS) plan are elaborated, and the average sample numbers of the developed MDS plan and these classical methods are compared. Second, a comparison between the developed MDS plan based on the Bayesian approach and the one based on the exact probability distribution is carried out.
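For reference, the process capability indices that such variable plans are built on are the standard ones below; the lot is accepted when the estimated index exceeds a critical value chosen to satisfy the risk constraints (the paper's plan may use a one-sided variant):

```latex
% Standard process capability indices; USL and LSL are the upper and
% lower specification limits, \mu and \sigma the process mean and
% standard deviation.
C_p = \frac{USL - LSL}{6\sigma}, \qquad
C_{pk} = \min\!\left(\frac{USL-\mu}{3\sigma},\ \frac{\mu-LSL}{3\sigma}\right)
```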

Keywords: MDS sampling plan, RGS plan, sampling plan for resubmitted lots, process capability index (PCI), average sample number (ASN), Bayesian approach

Procedia PDF Downloads 283