Search results for: forum selection clause
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2448

2118 Regional Variations in Spouse Selection Patterns of Women in India

Authors: Nivedita Paul

Abstract:

Marriages in India are part and parcel of kinship and cultural practices. Marriage practices differ across India because of cross-regional diversities in social relations, which have themselves evolved out of the causal relationship between space and culture. Because place is important for the formation of culture and other social structures, there is regional differentiation in cultural practices and marital customs. Based on these cultural practices, some scholars have divided India into North and South kinship regions, where women in the North marry early and have less autonomy compared to women in the South, where marriages are mostly consanguineous. However, with the emergence of new modes and alternative strategies such as matrimonial advertisements, as well as the increase in women's literacy and workforce participation, the matchmaking process in India has changed to some extent. The present study uses data from the Indian Human Development Survey II (2011-12), a nationally representative multi-topic survey covering 41,554 households. Currently married women aged 15-49 in their first marriage, whose year of marriage falls between the 1970s and the 2000s, have been taken for the study. Based on spouse selection experiences, the sample of women has been divided into three marriage categories: self-arranged, semi-arranged, and family arranged. In self-arranged or love marriages, the woman is the sole decision maker in choosing the partner; in semi-arranged marriages (arranged with consent), the woman and her parents take the decision together; whereas in family-arranged marriages (arranged without consent), only the parents decide. The main aim of the study is to show the spatial and regional variations in spouse selection decision making. The basis for regionalization has been taken from Irawati Karve's pioneering work on kinship studies in India, Kinship Organization in India. India is divided into four kinship regions: North, Central, South, and East. Since this work was formulated in 1953, some of the states have experienced changes due to modernization; hence these have been regrouped. After mapping spouse selection patterns using GIS software, it is found that the northern region has mostly family-arranged marriages (around 64.6%); the central zone shows a mixed pattern, since family-arranged marriages are fewer than in the North but more than in the South, and semi-arranged marriages are more than in the North but fewer than in the South. The southern zone is dominated by semi-arranged marriages (around 55%), whereas the eastern zone also has mostly semi-arranged marriages (around 53%) but with a high percentage of self-arranged marriages (around 42%). Thus, arranged marriage is the dominant form of marriage in all four regions, but with differences in the degree of involvement of the woman and her parents and relatives.

Keywords: spouse selection, consent, kinship, regional pattern

Procedia PDF Downloads 145
2117 Fault Diagnosis by Thresholding and Decision Tree with a Neuro-Fuzzy System

Authors: Y. Kourd, D. Lefebvre

Abstract:

The monitoring of industrial processes is required to ensure the operating conditions of industrial systems through the automatic detection and isolation of faults. This paper proposes a fault diagnosis method based on a neuro-fuzzy hybrid structure that combines threshold selection and a decision tree. The method is validated with the DAMADICS benchmark. In the first phase, a model representing the normal state of the system is constructed for fault detection. Fault signatures are obtained through residuals analysis and the selection of appropriate thresholds. These signatures yield groups of non-separable faults. In the second phase, we build faulty models for the faults that cannot be isolated in the first phase. In the last phase, we construct the decision tree that isolates these faults.
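
To make the phases concrete, the following minimal sketch (not the authors' code; the residuals, the injected fault, and the thresholds are invented) shows phase-1 thresholding into binary fault signatures and a decision tree separating the remaining faults:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical residuals: rows = observations, columns = residual signals.
residuals = rng.normal(0.0, 0.1, size=(200, 4))
residuals[100:, 1] += 0.8                        # inject a fault on residual 2
labels = np.array([0] * 100 + [1] * 100)         # 0 = normal, 1 = faulty

# Phase 1: thresholds chosen from the fault-free data (3-sigma rule).
thresholds = 3.0 * residuals[:100].std(axis=0)
signatures = (np.abs(residuals) > thresholds).astype(int)

# Phases 2-3: a decision tree isolates faults whose signatures overlap.
tree = DecisionTreeClassifier(max_depth=3).fit(signatures, labels)
print("training accuracy:", tree.score(signatures, labels))
```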

Keywords: decision tree, residuals analysis, ANFIS, fault diagnosis

Procedia PDF Downloads 606
2116 Tourism Policy and Opportunities for the Development of the Wellness Industry in Georgia

Authors: G. Erkomaishvili, R. Gvelesiani, E. Kharaishvili, M. Chavleishvili

Abstract:

This paper reviews the current situation of tourism in Georgia under conditions of globalization: touristic resources, the pace of development of the tourism infrastructure, tourism policy, and the possibilities for developing the wellness industry in Georgia, the newest direction of medical tourism. The factors impeding the development of the tourism industry are studied and analyzed, namely: the existence of conflict zones, high bank credit rates, deficiencies in the tax laws, the level of infrastructural development, the quality of services, the deficit of competitive staff, price increases in peak seasons, and insufficient promotion of Georgia's touristic opportunities on international markets. In addition, the level of development of tourism in Georgia according to the World Economic Forum and aspects of cooperation with the European Union are reviewed. As a result of these studies, a strategy for the development of tourism, and of one of its directions, the wellness industry in Georgia, is introduced, along with the relevant conclusions and the recommendations based on them.

Keywords: tourism, tourism policy, wellness industry, business, innovation, technology

Procedia PDF Downloads 492
2115 The Acquisition of Cases in the Biological Domain Based on Text Mining

Authors: Shen Jian, Hu Jie, Qi Jin, Liu Wei Jie, Chen Ji Yi, Peng Ying Hong

Abstract:

In order to address the problem of acquiring cases in the biological domain related to design problems, a biological case acquisition method based on text mining is presented. Through the construction of a corpus text vector space and knowledge mining, the feature selection, similarity measurement, and case retrieval methods for text in the field of biology are studied. First, we establish a vector space model of the corpus in the biological field and complete the preprocessing steps. Then, the corpus is searched using the vector space model combined with functional keywords to obtain the biological domain cases related to the design problems. Finally, we verify the validity of the method with a text example.
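
As an illustration of the vector-space retrieval step, the sketch below builds a TF-IDF space over a toy biological corpus and retrieves the closest case by cosine similarity; the corpus texts and query are placeholders, not the study's data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "gecko foot setae enable reversible adhesion on smooth surfaces",
    "lotus leaf micro-structure gives self-cleaning water repellency",
    "woodpecker skull damps repeated high-g impact loading",
]
query = ["adhesion to smooth surface"]  # functional keywords for a design problem

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)   # corpus vector space model
query_vector = vectorizer.transform(query)

scores = cosine_similarity(query_vector, doc_vectors).ravel()
best = scores.argmax()
print(f"best case (score {scores[best]:.2f}):", corpus[best])
```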

Keywords: text mining, vector space model, feature selection, biologically inspired design

Procedia PDF Downloads 233
2114 Exploring the Importance of Different Product Cues on the Selection of Chocolate from the Consumer Perspective

Authors: Ezeni Brzovska, Durdana Ozretic-Dosen

Abstract:

The purpose of this paper is to deepen the understanding of the product cues that influence the purchase decision for a specific product category, chocolate, and to identify demographic differences in buying behavior. ANOVA was employed to analyze the significance level of nine product cues, and the survey showed statistically significant differences among age and gender groups and between respondents with different levels of education. From the theoretical perspective, the study adds to existing knowledge by contributing research results from a new environment (Southeast Europe, Macedonia), which has been neglected so far. Establishing the level of significance of the product cues that affect buying behavior in the chocolate consumption context might help managers improve marketing decision-making and better meet consumer needs by identifying opportunities for packaging innovations and/or personalization toward different target groups.
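
The ANOVA step can be illustrated as follows; the importance ratings below are fabricated for demonstration and do not reproduce the survey data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Hypothetical 1-5 importance ratings of one cue (e.g., packaging) by age group.
young = rng.normal(4.2, 0.8, 40)
middle = rng.normal(3.8, 0.8, 40)
older = rng.normal(3.4, 0.8, 40)

f_stat, p_value = f_oneway(young, middle, older)   # one-way ANOVA
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("cue importance differs significantly across age groups")
```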

Keywords: chocolate consumption context, chocolate selection, demographic characteristics, product cues

Procedia PDF Downloads 229
2113 Supply Chain Risk Management (SCRM): A Simplified Alternative for Implementing SCRM for Small and Medium Enterprises

Authors: Paul W. Murray, Marco Barajas

Abstract:

Recent changes in supply chains, especially globalization and collaboration, have created new risks for enterprises of all sizes. A variety of complex frameworks, often based on enterprise risk management strategies, have been presented under the heading of Supply Chain Risk Management (SCRM). The literature promotes the benefits of a robust SCRM strategy; however, implementing SCRM is difficult and resource-demanding for Large Enterprises (LEs), and essentially out of reach for Small and Medium Enterprises (SMEs). This research debunks the idea that SCRM is necessary for all enterprises and instead proposes a simple and effective Vendor Selection Template (VST). Empirical testing and a survey of supply chain practitioners provide a measure of validation for the VST. The resulting VST is a valuable contribution because it is easy to use, provides practical results, and is sufficiently flexible to be universally applied to SMEs.
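
A vendor selection template of this kind can be as simple as a weighted scoring sheet. The sketch below is an illustrative assumption of such a template; the criteria, weights, and scores are invented, not the published VST:

```python
criteria_weights = {                 # weights sum to 1.0 (assumed criteria)
    "financial_stability": 0.25,
    "delivery_reliability": 0.30,
    "geographic_risk": 0.20,
    "quality_record": 0.25,
}
vendors = {                          # hypothetical 1-5 scores per criterion
    "Vendor A": {"financial_stability": 4, "delivery_reliability": 5,
                 "geographic_risk": 3, "quality_record": 4},
    "Vendor B": {"financial_stability": 5, "delivery_reliability": 3,
                 "geographic_risk": 4, "quality_record": 3},
}

def weighted_score(scores: dict) -> float:
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranked = sorted(vendors, key=lambda v: weighted_score(vendors[v]), reverse=True)
for v in ranked:
    print(f"{v}: {weighted_score(vendors[v]):.2f}")
```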

Keywords: multiple regression analysis, supply chain management, risk assessment, vendor selection

Procedia PDF Downloads 439
2112 Optical Variability of Faint Quasars

Authors: Kassa Endalamaw Rewnu

Abstract:

The variability properties of a quasar sample, spectroscopically complete to magnitude J = 22.0, are investigated on a time baseline of 2 years using three different photometric bands (U, J and F). The original sample was obtained using a combination of different selection criteria: colors, slitless spectroscopy and variability, based on a time baseline of 1 yr. The main goals of this work are two-fold: first, to derive the percentage of variable quasars on a relatively short time baseline; secondly, to search for new quasar candidates missed by the other selection criteria; and, thus, to estimate the completeness of the spectroscopic sample. In order to achieve these goals, we have extracted all the candidate variable objects from a sample of about 1800 stellar or quasi-stellar objects with limiting magnitude J = 22.50 over an area of about 0.50 deg². We find that > 65% of all the objects selected as possible variables are either confirmed quasars or quasar candidates on the basis of their colors. This percentage increases even further if we exclude from our lists of variable candidates a number of objects equal to that expected on the basis of 'contamination' induced by our photometric errors. The percentage of variable quasars in the spectroscopic sample is also high, reaching about 50%. On the basis of these results, we can estimate that the incompleteness of the original spectroscopic sample is < 12%. We conclude that variability analysis of data with small photometric errors can be successfully used as an efficient and independent (or at least auxiliary) selection method in quasar surveys, even when the time baseline is relatively short. Finally, when corrected for the different intrinsic time lags corresponding to a fixed observed time baseline, our data do not show a statistically significant correlation between variability and either absolute luminosity or redshift.
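
A simple variability criterion of the kind described, flagging objects whose magnitude scatter exceeds what the photometric errors alone would produce, can be sketched as follows (all data are simulated):

```python
import numpy as np

rng = np.random.default_rng(2)
n_objects, n_epochs = 1000, 6
photometric_error = 0.05                          # mag, assumed per-epoch error

mags = np.full((n_objects, n_epochs), 21.0)
mags += rng.normal(0.0, photometric_error, mags.shape)   # measurement noise
mags[:50] += rng.normal(0.0, 0.15, (50, n_epochs))       # truly variable QSOs

rms = mags.std(axis=1, ddof=1)                    # scatter over the epochs
threshold = 2.0 * photometric_error               # crude significance cut
candidates = np.where(rms > threshold)[0]
print(f"{len(candidates)} variability-selected candidates")
```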

Keywords: nuclear activity, galaxies, active quasars, variability

Procedia PDF Downloads 50
2111 A Sustainable Training and Feedback Model for Developing the Teaching Capabilities of Sessional Academic Staff

Authors: Nirmani Wijenayake, Louise Lutze-Mann, Lucy Jo, John Wilson, Vivian Yeung, Dean Lovett, Kim Snepvangers

Abstract:

Sessional academic staff at universities have the most influence and impact on student learning, engagement, and experience as they have the most direct contact with undergraduate students. A blended technology-enhanced program was created for the development and support of sessional staff to ensure adequate training is provided to deliver quality educational outcomes for the students. This program combines innovative mixed media educational modules, a peer-driven support forum, and face-to-face workshops to provide a comprehensive training and support package for staff. Additionally, the program encourages the development of learning communities and peer mentoring among the sessional staff to enhance their support system. In 2018, the program was piloted on 100 sessional staff in the School of Biotechnology and Biomolecular Sciences to evaluate the effectiveness of this model. As part of the program, rotoscope animations were developed to showcase ‘typical’ interactions between staff and students. These were designed around communication, confidence building, consistency in grading, feedback, diversity awareness, and mental health and wellbeing. When surveyed, 86% of sessional staff found these animations to be helpful in their teaching. An online platform (Moodle) was set up to disseminate educational resources and teaching tips, to host a discussion forum for peer-to-peer communication and to increase critical thinking and problem-solving skills through scenario-based lessons. The learning analytics from these lessons were essential in identifying difficulties faced by sessional staff to further develop supporting workshops to improve outcomes related to teaching. The face-to-face professional development workshops were run by expert guest speakers on topics such as cultural diversity, stress and anxiety, LGBTIQ and student engagement. All the attendees of the workshops found them to be useful, and 88% said they felt these workshops increased interaction with their peers and built a sense of community. The final component of the program was to use an adaptive e-learning platform to gather feedback from the students on sessional staff teaching twice during the semester. The initial feedback provides sessional staff with enough time to reflect on their teaching and adjust their performance if necessary, to improve the student experience. The feedback from students and the sessional staff on this model has been extremely positive. The training equips the sessional staff with knowledge and insights which can provide students with an exceptional learning environment. This program is designed in a flexible and scalable manner so that other faculties or institutions could adapt components for their own training. It is anticipated that the training and support would help to build the next generation of educators who will directly impact the educational experience of students.

Keywords: designing effective instruction, enhancing student learning, implementing effective strategies, professional development

Procedia PDF Downloads 103
2110 The Board Structure of Public and Private Sector Companies and Its Impact on Firm Performance: A Study of Fortune 500 Indian Companies from 2006 to 2015

Authors: Gayathri P. Nair

Abstract:

The focus of this study is to identify whether board structure has any significant impact on firm performance, and whether there is any evidence of an effect of being listed in the Fortune 500, the list compiled by the American business magazine Fortune and published globally by Time Inc., ranking the world's wealthiest companies. The list is released based on the ranking of total revenues for the respective fiscal year ending on or before March 31st. The study was conducted on the Indian companies that were listed in the Fortune 500 over the past 10 years. It employs a logistic regression between the variables firm performance and board composition, as defined in Clause 49 of the Companies Act, 1956 and 2013. To measure firm performance, ROA was selected as the key performance metric, as it focuses management attention on the assets required to run the business. The highlight of the study is that the analysis was applied to public and private sector firms separately, revealing whether board composition helps maintain a position in the list. In addition, the findings reveal that, apart from independent directors, all other variables have a significant impact on firm performance.

Keywords: board structure, Fortune 500 company, firm performance, India

Procedia PDF Downloads 213
2109 A Framework for Evaluating the QoS and Cost of Web Services Based on Their Functional Performance

Authors: M. Mohemmed Sha, T. Manesh, A. Ahmed Mohamed Mustaq

Abstract:

In the corporate world, the technology of Web services has grown rapidly, and its significance for the development of web-based applications has gradually risen over time. The success of business-to-business integration relies on finding novel partners and their services in a global business environment. However, the selection of the most suitable Web service from a list of services with identical functionality is vital. The satisfaction level of the customer and the provider's reputation primarily depend on the extent to which the Web service meets the customer's requirements. In many cases, the customer of a Web service feels that they are paying for a service that is not delivered, because the real functionality of the Web service is never reached; this leads to frequent changes of service. In this paper, a framework is proposed to evaluate the Quality of Service (QoS) and its cost, establishing the optimal correlation between the two. This research work also proposes management decisions against functional deviation of the Web service from what was guaranteed at the time of selection.
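
As a toy illustration of correlating QoS with cost, the sketch below aggregates two quality attributes into a QoS score and ranks services by QoS per unit cost; the attributes, weights, and numbers are assumptions, not the paper's framework:

```python
services = {
    #            (availability, response_ms, cost_per_call)
    "ServiceA": (0.99, 120, 0.010),
    "ServiceB": (0.95, 60, 0.004),
    "ServiceC": (0.999, 200, 0.015),
}

def qos_score(avail: float, response_ms: float) -> float:
    # higher availability is better; lower response time is better
    return 0.6 * avail + 0.4 * (1.0 - min(response_ms, 500) / 500)

ranked = sorted(
    services.items(),
    key=lambda kv: qos_score(kv[1][0], kv[1][1]) / kv[1][2],
    reverse=True,
)
for name, (avail, resp, cost) in ranked:
    print(f"{name}: QoS/cost = {qos_score(avail, resp) / cost:.1f}")
```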

Keywords: web service, service level agreement, quality of a service, cost of a service, QoS, CoS, SOA, WSLA, WsRF

Procedia PDF Downloads 389
2108 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah

Abstract:

Hyperspectral imagery (HSI) typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image. Hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution. Therefore, semantic interpretation is a challenging task in HSI analysis. In this paper, we focus on object classification as HSI semantic interpretation. However, HSI classification still faces some issues, among which are the following: the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. The high number of spectral bands combined with the low number of training samples poses the problem of the curse of dimensionality. In order to resolve this problem, we propose to introduce a dimensionality reduction process to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships with different weights for the spatial neighbors corresponding to each centroid pixel. This semi-supervised band selection has been developed to select useful bands for object classification. The presented approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.
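
The hypergraph embedding itself is beyond a short sketch, but the semi-supervised flavor of band selection can be illustrated with a simpler stand-in: score the bands using only a small labeled subset (here via mutual information) and keep the top-k for classification. All data below are synthetic:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(7)
n_pixels, n_bands = 2000, 100
X = rng.normal(size=(n_pixels, n_bands))          # spectra (placeholder cube)
y = rng.integers(0, 4, size=n_pixels)             # 4 object classes
X[:, 10] += y * 0.8                               # make a few bands informative
X[:, 55] += (y == 2) * 1.5

labeled = rng.choice(n_pixels, size=100, replace=False)   # scarce labels
mi = mutual_info_classif(X[labeled], y[labeled], random_state=0)

k = 10
selected_bands = np.argsort(mi)[::-1][:k]         # keep the k best-scoring bands
print("selected bands:", sorted(selected_bands.tolist()))
```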

Keywords: dimensionality reduction, hyperspectral image, semantic interpretation, spatial hypergraph

Procedia PDF Downloads 288
2107 Multi-Objective Discrete Optimization of External Thermal Insulation Composite Systems in Terms of Thermal and Embodied Energy Performance

Authors: Berfin Yildiz

Abstract:

These days, increasing global warming effects, the limited amount of energy resources, etc., necessitate an awareness that must be present in every professional group. The architecture and construction sectors are responsible for both the embodied and the operational energy of materials. This responsibility has led designers to seek alternative solutions for energy-efficient material selection. The choice of energy-efficient materials requires consideration of the entire life cycle, including the building's production, use, and disposal energy. The aim of this study is to investigate a material selection method for external thermal insulation composite systems (ETICS). Embodied and in-use energy values of material alternatives were used for the evaluation. The operational energy is calculated according to the U-value calculation method defined in the TS 825 (Thermal Insulation Requirements) standard for Turkey, and the embodied energy is calculated based on the manufacturers' Environmental Product Declarations (EPD). ETICS consists of wall, adhesive, insulation, lining, mechanical fixing, mesh, and exterior finishing materials. In this study, the lining, mechanical fixing, and mesh materials were ignored because EPD documents could not be obtained. The material selection problem is designed around a hypothetical volume (5x5x3 m) and defined as a multi-objective discrete optimization problem for external thermal insulation composite systems. Defining the problem as a discrete optimization problem is important in order to choose between materials of various thicknesses and sizes. Since the production and use energy values, which are set as the optimization objectives in this study, are often conflicting, material selection is defined as a multi-objective optimization problem, and the aim is to obtain many solution alternatives by using the Hypervolume (HypE) algorithm. The evolutionary run started with a population of 100 individuals and continued for 50 generations. According to the obtained results, autoclaved aerated concrete and Ponce block as wall materials and glass wool as insulation material gave better results.
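
At the core of such a multi-objective search is Pareto dominance over the two conflicting objectives. The sketch below keeps the non-dominated ETICS configurations; HypE additionally applies hypervolume-based selection on top of this, and the energy values here are invented:

```python
options = {
    # name: (embodied energy MJ/m2, operational energy MJ/m2.yr) -- assumed
    "AAC wall + glass wool 8cm": (420.0, 95.0),
    "AAC wall + glass wool 12cm": (465.0, 80.0),
    "brick wall + EPS 8cm": (510.0, 100.0),
    "Ponce block + glass wool 10cm": (430.0, 85.0),
}

def dominates(a, b):
    """a dominates b if it is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = [
    name for name, obj in options.items()
    if not any(dominates(other, obj)
               for other_name, other in options.items() if other_name != name)
]
print("non-dominated configurations:", pareto)
```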

Keywords: embodied energy, multi-objective discrete optimization, performative design, thermal insulation

Procedia PDF Downloads 110
2106 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood; this includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverage close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal a similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
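
The quantile computation at the heart of the method can be illustrated as follows: given an assumed asymptotic multivariate normal law for the candidate models' GIC values, approximate the upper quantile of their minimum. The paper evaluates this exactly via multivariate Gaussian integrals (R's mvtnorm); here plain Monte Carlo stands in, and the mean vector and covariance are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([10.0, 10.5, 12.0])           # hypothetical GIC means, 3 models
Sigma = np.array([[1.0, 0.6, 0.3],           # hypothetical GIC covariance
                  [0.6, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])

draws = rng.multivariate_normal(mu, Sigma, size=200_000)
min_gic = draws.min(axis=1)                  # distribution of the minimum GIC
upper_95 = np.quantile(min_gic, 0.95)
print(f"95% upper quantile of the minimum GIC: {upper_95:.3f}")
# Models whose observed GIC falls below this band edge cannot be ruled out
# as the true minimizer at the 5% level.
```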

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 65
2105 Diffusion Adaptation Strategies for Distributed Estimation Based on the Family of Affine Projection Algorithms

Authors: Mohammad Shams Esfand Abadi, Mohammad Ranjbar, Reza Ebrahimpour

Abstract:

This work addresses the distributed estimation problem in a diffusion network based on the adapt-then-combine (ATC) and combine-then-adapt (CTA) selective partial update normalized least mean squares (SPU-NLMS) algorithms. We also extend this approach to the dynamic selection affine projection algorithm (DS-APA), establishing ATC-DS-APA and CTA-DS-APA. The purpose of the ATC-SPU-NLMS and CTA-SPU-NLMS algorithms is to reduce computational complexity by updating only selected blocks of weight coefficients at every iteration. In CTA-DS-APA and ATC-DS-APA, the number of input vectors is selected dynamically. Diffusion cooperation strategies based on these algorithms have been shown to provide good performance. The good performance of the introduced algorithms is illustrated with various experimental results.
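
A minimal adapt-then-combine diffusion NLMS sketch for a three-node network is given below; the selective-partial-update variant would update only selected coefficient blocks in the adapt step. The network, data, and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
M, n_nodes, n_iter, mu, eps = 8, 3, 2000, 0.5, 1e-6
w_true = rng.normal(size=M)                      # common unknown vector

# doubly stochastic combination matrix for a fully connected 3-node network
A = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])

W = np.zeros((n_nodes, M))                       # local estimates
for _ in range(n_iter):
    # adapt: each node runs one NLMS step on its own streaming data
    psi = np.empty_like(W)
    for k in range(n_nodes):
        u = rng.normal(size=M)
        d = u @ w_true + 0.01 * rng.normal()
        e = d - u @ W[k]
        psi[k] = W[k] + mu * e * u / (eps + u @ u)
    # combine: each node averages its neighbors' intermediate estimates
    W = A @ psi

print("MSD (dB):", 10 * np.log10(np.mean((W - w_true) ** 2)))
```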

Keywords: selective partial update, affine projection, dynamic selection, diffusion, adaptive distributed networks

Procedia PDF Downloads 678
2104 Method for Selecting and Prioritising Smart Services in Manufacturing Companies

Authors: Till Gramberg, Max Kellner, Erwin Gross

Abstract:

This paper presents a comprehensive investigation into the topic of smart services and IIoT platforms, focusing on their selection and prioritization in manufacturing organizations. First, a literature review is conducted to provide a basic understanding of the current state of research in the area of smart services. Based on discussed and established definitions, a definition approach for this paper is developed. In addition, value propositions for smart services are identified based on the literature and expert interviews. Furthermore, the general requirements for the provision of smart services are presented. Subsequently, existing approaches for the selection and development of smart services are identified and described. In order to determine the requirements for the selection of smart services, expert opinions from successful companies that have already implemented smart services are collected through semi-structured interviews. Based on the results, criteria for the evaluation of existing methods are derived. The existing methods are then evaluated according to the identified criteria. Furthermore, a novel method for the selection of smart services in manufacturing companies is developed, taking into account the identified criteria and the existing approaches. The developed concept for the method is verified in expert interviews. The method includes a collection of relevant smart services identified in the literature. The actual relevance of the use cases in the industrial environment was validated in an online survey. The required data and sensors are assigned to the smart service use cases. The value proposition of the use cases is evaluated in an expert workshop using different indicators. Based on this, a comparison is made between the identified value proposition and the required data, leading to a prioritization process. The prioritization process follows an established procedure for evaluating technical decision-making processes. In addition to the technical requirements, the prioritization process includes other evaluation criteria such as the economic benefit, the conformity of the new service offering with the company strategy, or the customer retention enabled by the smart service. Finally, the method is applied and validated in an industrial environment. The results of these experiments are critically reflected upon, and an outlook on future developments in the area of smart services is given. This research contributes to a deeper understanding of the selection and prioritization process as well as the technical considerations associated with smart service implementation in manufacturing organizations. The proposed method serves as a valuable guide for decision makers, helping them to effectively select the most appropriate smart services for their specific organizational needs.

Keywords: smart services, IIoT, industrie 4.0, IIoT-platform, big data

Procedia PDF Downloads 55
2103 Relay Node Selection Algorithm for Cooperative Communications in Wireless Networks

Authors: Sunmyeng Kim

Abstract:

IEEE 802.11a/b/g standards support multiple transmission rates. Even though the use of multiple transmission rates increases WLAN capacity, this feature leads to the performance anomaly problem. Cooperative communication was introduced to relieve the performance anomaly problem: data packets are delivered to the destination much faster through a relay node at a high rate than through direct transmission to the destination at a low rate. In legacy cooperative protocols, a source node chooses a relay node based only on the transmission rate. Therefore, they are not very feasible in multi-flow environments, since they do not consider the effect of other flows. To alleviate this effect, we propose a new relay node selection algorithm based on both the transmission rate and the channel contention level. Performance evaluation is conducted using simulation and shows that the proposed protocol significantly outperforms the previous protocol in terms of throughput and delay.
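
The selection rule can be sketched as follows: score each candidate relay by its two-hop rates discounted by its channel contention level, rather than by rate alone. The scoring function is an illustrative assumption, not the paper's exact formula:

```python
def effective_rate(r_src_relay: float, r_relay_dst: float) -> float:
    # two-hop effective rate: total time for one bit over both hops
    return 1.0 / (1.0 / r_src_relay + 1.0 / r_relay_dst)

def relay_score(r1: float, r2: float, contention: float) -> float:
    # discount the two-hop rate by the fraction of time the relay's
    # channel is busy with other flows (contention in [0, 1))
    return effective_rate(r1, r2) * (1.0 - contention)

candidates = {
    # relay: (rate src->relay Mbps, rate relay->dst Mbps, contention level)
    "relay1": (54.0, 54.0, 0.7),
    "relay2": (36.0, 48.0, 0.1),
}
best = max(candidates, key=lambda k: relay_score(*candidates[k]))
print("selected relay:", best)   # relay2 wins despite lower raw rates
```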

Keywords: cooperative communications, MAC protocol, relay node, WLAN

Procedia PDF Downloads 312
2102 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that still remains a real challenge for many scientists. The support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. Often, the data are not linearly separable; SVMs are able to map the data into a higher-dimensional space where they become linearly separable, while still performing all the computations in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited for remote sensing classification problems and explain their recent adoption. In this poster, the SVM classification of ALS LiDAR data is proposed. First, connected component analysis is applied to cluster the point cloud. Second, the resulting clusters are fed into the SVM classifier. The radial basis function (RBF) kernel is used because of the small number of parameters (C and γ) that need to be chosen, which decreases computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets are selected randomly several times. The obtained results demonstrate that parameter selection can narrow the search to a restricted interval of (C, γ) that can be further explored, but it does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison shows the superiority of the SVM classifier using parameter selection for LiDAR data over the other classifiers.
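
The (C, γ) selection step maps directly onto a standard grid search with 5-fold cross-validation, sketched below on synthetic placeholders for the per-cluster LiDAR features:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for per-cluster features of the 4 classes.
X, y = make_classification(n_samples=600, n_features=10, n_classes=4,
                           n_informative=6, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5,
                      scoring="accuracy", n_jobs=-1)
search.fit(X, y)

print("best (C, gamma):", search.best_params_)
print("best CV overall accuracy:", round(search.best_score_, 3))
```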

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 129
2101 Environmental Quality On-Line Monitoring Based on Enterprise Resource Planning in the Implementation of ISO 14001:2004

Authors: Ahmad Badawi Saluy

Abstract:

This study aims to develop strategies for the prevention or elimination of environmental pollution, as well as for changes in external environmental variables, in order to implement the ISO 14001:2004 environmental management system by integrating the analysis of environmental issues data, RKL-RPL transactional data, and regulations as part of an ERP on the management dashboard. The research uses a quantitative descriptive approach, comparing measurements with the air quality standards (PP 42/1999, LH 21/2008), the water quality standards (Permenkes RI 416/1990, KepmenLH 51/2004, KepmenLH 55/2013), and biodiversity indicators. Based on the research, the RPL monitoring parameters have been identified, among them the air emission quality (SO₂, NO₂, dust, particulate), influenced by fuel quality, combustion performance in the combustor, and the effect of development changes around the generating area. In water quality (TSS, TDS), an increase was observed due to the water flow at the cooling intake carrying sediment from the Banjir Kanal Timur. Compliance with the ISO 14001:2004 clauses in the application design also contributes significantly to improving the quality of power plant management.

Keywords: environmental management systems, power plant management, regulatory compliance, enterprise resource planning

Procedia PDF Downloads 157
2100 Performance and Emission Prediction in a Biodiesel Engine Fuelled with Honge Methyl Ester Using RBF Neural Networks

Authors: Shiva Kumar, G. S. Vijay, Srinivas Pai P., Shrinivasa Rao B. R.

Abstract:

In the present study, RBF neural networks were used to predict the performance and emission parameters of a biodiesel engine. Engine experiments were carried out on a four-stroke diesel engine using blends of diesel and Honge methyl ester as the fuel. Performance parameters such as BTE, BSEC, and exhaust gas temperature, along with emissions from the engine, were measured. These experimental results were used for ANN modeling. RBF center initialization was done both by random selection and by using clustering techniques. The network was trained using fixed and varying widths for the RBF units. It was observed that the RBF results showed good agreement with the experimental results. Networks trained using the clustering technique gave better results than those using random selection of centers, in terms of reduced MRE and increased prediction accuracy. The average MRE for the performance parameters was 3.25% with a prediction accuracy of 98%, and for emissions it was 10.4% with a prediction accuracy of 80%.
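
A minimal RBF-network sketch mirroring this setup is shown below: centers from clustering (k-means as a stand-in for the clustered initialization), one fixed width for all units, and a least-squares linear output layer. The engine data are replaced by a synthetic regression target:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(200, 2))              # e.g., blend ratio, load
y = 2.0 + np.sin(4 * X[:, 0]) + 0.5 * X[:, 1]     # stand-in performance target

k = 12
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_
width = np.mean([np.linalg.norm(a - b) for a in centers for b in centers])

def design_matrix(X):
    # Gaussian RBF activations for every (sample, center) pair
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * width ** 2))

w, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)  # output weights
pred = design_matrix(X) @ w

mre = np.mean(np.abs((pred - y) / y))             # mean relative error
print(f"training MRE: {100 * mre:.2f}%")
```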

Keywords: radial basis function networks, emissions, performance parameters, fuzzy c means

Procedia PDF Downloads 534
2099 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, and it has become more important to understand customers' needs in this strong market, especially for customers who are looking to change their service providers. Churn prediction is therefore now a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish this. Churn prediction has become a very important topic for machine learning classification in the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important for building an effective churn prediction model. This paper aims to predict churn and identify the factors behind customers' churn based on their past service usage history. With this objective, the study makes use of feature selection, normalization, and feature engineering. It then compares the performance of four different machine learning algorithms on the Orange dataset: logistic regression, random forest, decision tree, and gradient boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models shows that it produces better results: the gradient boosting model with the feature selection technique performed best, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
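
The pipeline described maps onto a few lines of scikit-learn; the sketch below uses a synthetic imbalanced dataset as a stand-in for the Orange data, with feature selection, normalization, and gradient boosting as the final classifier:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=4000, n_features=30, n_informative=8,
                           weights=[0.85], random_state=0)  # ~15% churners
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = Pipeline([
    ("select", SelectKBest(f_classif, k=12)),    # feature selection
    ("scale", MinMaxScaler()),                   # normalization
    ("gbm", GradientBoostingClassifier(random_state=0)),
])
model.fit(X_tr, y_tr)

print("F1:", round(f1_score(y_te, model.predict(X_te)), 3))
print("ROC-AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```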

Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score

Procedia PDF Downloads 113
2098 Technology Identification, Evaluation and Selection Methodology for Industrial Process Water and Waste Water Treatment Plant of 3x150 MWe Tufanbeyli Lignite-Fired Power Plant

Authors: Cigdem Safak Saglam

Abstract:

Most thermal power plants use steam as the working fluid in their power cycle; therefore, in addition to fuel, water is the other main input for thermal plants. Water and steam must be highly pure in order to protect the systems from corrosion, scaling, and biofouling. Pure process water is produced in water treatment plants using several treatment methods. The treatment plant design is selected depending on the raw water source and the required water quality. Although the working principles of fossil-fuel-fired thermal power plants are the same, there is no standard design and equipment arrangement valid for all thermal power plant utility systems. Besides that, there are many other technology evaluation and selection criteria for designing optimal water systems that meet the requirements, such as local conditions, environmental restrictions, the availability and transport of electricity and other consumables, process water sources and scarcity, land use constraints, etc. The aim of this study is to explain the methodology adopted for technology selection for the process water preparation and industrial wastewater treatment plant in a thermal power plant project located in Tufanbeyli, Adana Province, Turkey. The thermal power plant is fired with indigenous lignite coal extracted from adjacent lignite reserves. This paper addresses all the above-mentioned factors affecting the design of the thermal power plant water treatment facilities (demineralization + wastewater treatment) and describes the ultimate design of the Tufanbeyli Thermal Power Plant Water Treatment Plant.

Keywords: thermal power plant, lignite coal, pretreatment, demineralization, electrodialysis, recycling, ash dampening

Procedia PDF Downloads 459
2097 Variant Selection and Pre-transformation Phase Reconstruction for Deformation-Induced Transformation in AISI 304 Austenitic Stainless Steel

Authors: Manendra Singh Parihar, Sandip Ghosh Chowdhury

Abstract:

Austenitic stainless steels are widely used and give a good combination of properties. When this steel is plastically deformed, a phase transformation of the metastable face-centred cubic (FCC) austenite to the stable body-centred cubic (α′) or to the hexagonal close-packed (ε) martensite may occur, leading to an enhancement in mechanical properties such as strength. The work was based on variant selection and the corresponding texture analysis for the strain-induced martensitic transformation during deformation of the parent austenite FCC phase to form the product HCP and BCC martensite phases separately, obeying their respective orientation relationships. The automated reconstruction of the parent phase orientation from the EBSD data of the product phase orientation is done using MATLAB and the TSL-OIM software. The method of triplets was used, which involves forming a triplet of neighboring product grains having a common variant and linking them using a misorientation-based criterion. This led to a proper reconstruction of the pre-transformation phase orientation data and thus of its microstructure and texture. The computational speed of the current method is better than that of previously used reconstruction methods. The reconstruction of austenite from ε and α′ martensite was carried out for multiple samples, and their IPF images, pole figures, inverse pole figures, and ODFs were compared. Similar results were observed for all samples. The comparison helps estimate the correct sequence of the transformation, i.e., γ → ε → α′ or γ → α′, during deformation of AISI 304 austenitic stainless steel.

Keywords: variant selection, reconstruction, EBSD, austenitic stainless steel, martensitic transformation

Procedia PDF Downloads 471
2096 The Effect of Program Type on Mutation Testing: Comparative Study

Authors: B. Falah, N. E. Abakouy

Abstract:

Due to its high computational cost, mutation testing has often been neglected by researchers. Recently, many cost and mutant reduction techniques have been developed, improved, and experimented with, but few of them have considered the possibility of reducing the cost of mutation testing based on the program type of the application under test. This paper is a comparative study of four operator selection techniques (mutant sampling, class-level operators, method-level operators, and all-operators selection) based on the program code type of each application under test. It aims at finding an alternative approach to reveal the effect of code type on the mutation score. The results of our experiment show that the program code type can affect the mutation score and that programs using polymorphism are best suited to be tested with mutation testing.

Keywords: equivalent mutant, killed mutant, mutation score, mutation testing, program code type, software testing

Procedia PDF Downloads 529
2095 A Hybrid System for Borehole Soil Samples

Authors: Ali Ulvi Uzer

Abstract:

Data reduction is an important topic in the field of pattern recognition applications. The basic concept is the reduction of multitudinous amounts of data down to their meaningful parts. The Principal Component Analysis (PCA) method is frequently used for data reduction. The Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane: given labeled training data, the algorithm outputs an optimal hyperplane that categorizes new examples. This study offers a hybrid approach that uses PCA for data reduction and Support Vector Machines (SVM) for classification. In order to assess the accuracy of the suggested system, soil samples taken from two boreholes were used. The classification accuracies for this dataset were obtained using the ten-fold cross-validation method. As the results suggest, this system, which performs dimensionality reduction, enables faster recognition of the dataset, so our study results appear very promising.
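
The hybrid reduces to a short pipeline; the sketch below substitutes synthetic features for the borehole soil-sample measurements:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the soil-sample features.
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)

# PCA for data reduction feeding an SVM classifier.
hybrid = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
scores = cross_val_score(hybrid, X, y, cv=10)     # ten-fold cross-validation
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```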

Keywords: feature selection, sequential forward selection, support vector machines, soil sample

Procedia PDF Downloads 430
2094 Functional Mortality of Anopheles stephensi, the Urban Malaria Vector as Induced by the Sublethal Exposure to Deltamethrin

Authors: P. Aarumugam, N. Krishnamoorthy, K. Gunasekaran

Abstract:

Mosquitoes that lose a minimum of three legs, especially the hind legs, suffer a negative impact on their likelihood of survival. A three-day-old unfed adult female laboratory strain was selected in each generation against sublethal dosages (0.004%, 0.005%, 0.007%, and 0.01%) of deltamethrin for up to 40 generations. Papers impregnated with acetone were used as controls. Every fourth generation, the surviving mosquitoes were observed for functional mortality. Hind-leg loss was significantly (P < 0.05) higher in the treated groups than in the controls up to generation 24; thereafter, no significant loss was observed. In contrast, there was no significant foreleg loss among the exposed mosquitoes. Middle-leg loss was also not significant in the exposed mosquitoes except in the first generation (F1). The field strain (Chennai) did not show any significant loss of legs (fore, mid, or hind) compared to the control. The selection pressure on the mosquito population drives strong natural selection to develop various adaptive mechanisms.

Keywords: Anopheles stephensi, deltamethrin, functional mortality, synthetic pyrethroids

Procedia PDF Downloads 372
2093 Applying Multiple Intelligences to Teach Buddhist Doctrines in a Classroom

Authors: Phalaunnnaphat Siriwongs

Abstract:

The classroom of the 21st century is an ever-changing forum for new and innovative thoughts and ideas. With increasing technology and opportunity, students have rapid access to information that only decades ago would have taken weeks to obtain. Unfortunately, new techniques and technology are not the cure for the fundamental problems that have plagued the classroom ever since education was established. Class size has been an issue long debated in academia. While it is difficult to pinpoint an exact number, it is clear that in this case more does not mean better. By looking into the successes and pitfalls of different class sizes, the true advantages of smaller classes become clear. Previously, one class was comprised of 50 students. Being seventeen- and eighteen-year-old students, they sometimes found it quite difficult to stay focused. To help them understand and gain more knowledge, the researcher introduced the theory of multiple intelligences, and this, in fact, enabled students to learn according to their own learning preferences no matter how they were being taught. In this lesson, the researcher designed a cycle of learning activities involving all intelligences so that everyone had equal opportunities to learn.

Keywords: multiple intelligences, role play, performance assessment, formative assessment

Procedia PDF Downloads 257
2092 Selection Criteria in the Spanish Secondary Education Content and Language Integrated Learning (CLIL) Programmes and Their Effect on Code-Switching in CLIL Methodology

Authors: Dembele Dembele, Philippe

Abstract:

Several Second Language Acquisition (SLA) studies have stressed the benefits of Content and Language Integrated Learning (CLIL) and shown how CLIL students outperformed their non-CLIL counterparts in many L2 skills. However, numerous experimental CLIL programs seem to have mainly targeted above-average and rather highly motivated language learners. The need to understand the impact of the student’s language proficiency on code-switching in CLIL instruction motivated this study. Therefore, determining the implications of the students’ low-language proficiency for CLIL methodology, as well as the frequency with which CLIL teachers use the main pedagogical functions of code-switching, seemed crucial for a Spanish CLIL instruction on a large scale. In the mixed-method approach adopted, ten face-to-face interviews were conducted in nine Valencian public secondary education schools, while over 30 CLIL teachers also contributed with their experience in two online survey questionnaires. The results showed the crucial role language proficiency plays in the Valencian CLIL/Plurilingual selection criteria. The presence of a substantial number of low-language proficient students in CLIL groups, which in turn implied important methodological consequences, was another finding of the study. Indeed, though the pedagogical use of L1 was confirmed as an extended practice among CLIL teachers, more than half of the participants perceived that code-switching impaired attaining their CLIL lesson objectives. Therein, the dissertation highlights the need for more extensive empirical research on how code-switching could prove beneficial in CLIL instruction involving low-language proficient students while maintaining the maximum possible exposure to the target language.

Keywords: CLIL methodology, low language proficiency, code-switching, selection criteria, code-switching functions

Procedia PDF Downloads 47
2091 Using Greywolf Optimized Machine Learning Algorithms to Improve Accuracy for Predicting Hospital Readmission for Diabetes

Authors: Vincent Liu

Abstract:

Machine learning (ML) algorithms can achieve high accuracy in predicting outcomes compared to classical models. Metaheuristic, nature-inspired algorithms can enhance traditional ML algorithms by optimizing them, for example by performing feature selection. We compare ten ML algorithms in predicting 30-day hospital readmission rates for diabetes patients in the US, using a dataset from the UCI Machine Learning Repository with feature selection performed by the Grey Wolf nature-inspired algorithm. The baseline accuracy for the initial random forest model was 65%. After performing feature engineering, SMOTE for class balancing, and Grey Wolf optimization, the machine learning algorithms showed better metrics, including F1 scores, accuracy, and confusion matrices, with improvements ranging from 10% to 30%, and a best model of XGBoost with an accuracy of 95%. Applying machine learning this way can improve patient outcomes, as unnecessary rehospitalizations can be prevented by focusing on patients who are at a higher risk of readmission.
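
A didactic reimplementation of the feature-selection stage under assumed settings is sketched below: a simplified binary Grey Wolf Optimizer searching over feature subsets, with cross-validated random forest accuracy as the fitness. This is not the study's code, and the dataset is synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X, y = make_classification(n_samples=400, n_features=25, n_informative=6,
                           random_state=0)
n_feat, n_wolves, n_iter = X.shape[1], 6, 10

def fitness(mask):
    # cross-validated accuracy on the selected feature subset
    if not mask.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=30, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pos = rng.random((n_wolves, n_feat))             # continuous wolf positions
best_mask, best_score = None, -1.0

for t in range(n_iter):
    masks = pos > 0.5                            # binarize: selected features
    scores = np.array([fitness(m) for m in masks])
    if scores.max() > best_score:
        best_score = scores.max()
        best_mask = masks[scores.argmax()].copy()
    order = np.argsort(scores)[::-1]
    alpha, beta, delta = (pos[order[j]].copy() for j in range(3))
    a = 2.0 * (1 - t / n_iter)                   # exploration factor decays
    for i in range(n_wolves):
        new = np.zeros(n_feat)
        for leader in (alpha, beta, delta):      # move toward the 3 leaders
            r1, r2 = rng.random(n_feat), rng.random(n_feat)
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - pos[i])
        pos[i] = np.clip(new / 3.0, 0.0, 1.0)

print(f"selected {int(best_mask.sum())} features, CV accuracy {best_score:.3f}")
```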

Keywords: diabetes, machine learning, 30-day readmission, metaheuristic

Procedia PDF Downloads 32
2090 Optimizing Design Parameters for Efficient Saturated Steam Production in Fire Tube Boilers: A Cost-Effective Approach

Authors: Yoftahe Nigussie Worku

Abstract:

This research focuses on advancing fire tube boiler technology by systematically optimizing design parameters to achieve efficient saturated steam production. The main objective is to design a high-performance boiler with a production capacity of 2000 kg/h at a 12-bar design pressure while minimizing costs. The methodology employs iterative analysis, utilizing the relevant formulas, and considers material selection and production methods. The study results in a boiler operating at 85.25% efficiency, with a fuel consumption rate of 140.37 kg/hr and a heat output of 1610 kW. The theoretical importance lies in balancing efficiency, safety considerations, and cost minimization. The research addresses key questions on parameter optimization, material choices, and the safety-efficiency balance, contributing valuable insights to fire tube boiler design.
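
A short consistency check on the reported figures, assuming efficiency = useful heat output / fuel energy input; since the abstract does not state the fuel's calorific value, it is back-calculated here:

```python
steam_heat_output_kw = 1610.0        # reported heat output
fuel_rate_kg_per_h = 140.37          # reported fuel consumption
efficiency = 0.8525                  # reported efficiency

fuel_energy_input_kw = steam_heat_output_kw / efficiency
implied_cv_mj_per_kg = fuel_energy_input_kw / (fuel_rate_kg_per_h / 3600) / 1000

print(f"fuel energy input: {fuel_energy_input_kw:.1f} kW")          # ~1888.6 kW
print(f"implied calorific value: {implied_cv_mj_per_kg:.1f} MJ/kg")  # ~48.4 MJ/kg
```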

Keywords: safety consideration, efficiency, production methods, material selection

Procedia PDF Downloads 41
2089 Novel Bioinspired Design to Capture Smoky CO₂ by Reactive Absorption with Aqueous Scrubber

Authors: J. E. O. Hernandez

Abstract:

In the next 20 years, energy production by burning fuels will increase, and so will the atmospheric concentration of CO₂ and its well-known threats to life on Earth. The technologies available for capturing CO₂ are still dubious, and this keeps fostering interest in bio-inspired approaches. The leading one is the application of carbonic anhydrase (CA), a superfast biocatalyst able to convert up to one million molecules of CO₂ into carbonates in water. However, natural CA underperforms when applied to real smoky CO₂ in chimneys, and, so far, the efforts to create superior CAs in the lab rely on screening methods running under pristine conditions at the micro level, which are far from resembling those in chimneys. For the evolution of man-made enzymes, selection rather than screening would be ideal, but this is challenging because of the need for a suitable artificial environment that is also sustainable for our society. Herein we present the stepwise design and construction of a bioprocess (from bench scale to semi-pilot) for evolutionary selection experiments. In this bioprocess, reaction and absorption took place simultaneously at atmospheric pressure in a spray tower. The scrubbing solution was fed countercurrently by reusing municipal pressure and was mainly prepared with water, carbonic anhydrase, and calcium chloride. This bioprocess allowed for the enzymatic carbonation of smoky CO₂, the reuse of process water, and the recovery of solid carbonates without cooling of the smoke, pretreatments, amine solvents, or compression of CO₂. The average yield of solid carbonates was 0.54 g min⁻¹, or 12-fold the amount produced in serum bottles at lab bench scale. This bioprocess could be used as a tailor-made environment for driving the selection of superior CAs. The bioprocess and its matched CA could be used sustainably to reduce global warming caused by CO₂ emissions from exhausts.

Keywords: biological carbon capture and sequestration, carbonic anhydrase, directed evolution, global warming

Procedia PDF Downloads 173