Search results for: performance comparison
10552 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g. PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we also observe in Large Language Models. Notable attempts by TransE, TransR and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures and demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
Procedia PDF Downloads 72
10551 Numerical Performance Evaluation of a Savonius Wind Turbine Using Resistive Torque Modeling
Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines
Abstract:
The Savonius vertical axis wind turbine is characterized by sufficient starting torque at low wind speeds, a simple design, and no need for orientation to the wind direction; however, the developed power is lower than that of other types of wind turbines such as the Darrieus. To improve this performance, several studies have been carried out, such as optimizing blade shape, using passive controls, and minimizing sources of power loss like the resisting torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing into the CFD model a User Defined Function that accounts for the resisting torque. This User Defined Function is developed to simulate the action of the wind speed on the rotor; it receives the moment coefficient as an input and computes the rotational velocity that should be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied on the turbine and the resisting torque, which is considered a linear function. Linking the implemented User Defined Function with the CFD solver allows simulating the real functioning of the Savonius turbine exposed to wind. It is noticed that the wind turbine takes a while to reach the stationary regime, where the rotational velocity becomes invariable; at that moment, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one. The obtained results are in agreement with the available experimental results.
Keywords: resistant torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performances
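To make the coupling described above concrete, here is a minimal Python sketch of the UDF logic, assuming the resisting torque is linear in the rotational velocity; the inertia, geometry and torque coefficients are illustrative assumptions, not the paper's values, and a real UDF would receive the moment coefficient from the CFD solver at each time step instead of the placeholder formula used here.

```python
import numpy as np

# Sketch of the UDF logic: convert the solver's moment coefficient Cm to an
# aerodynamic torque, subtract a resisting torque assumed linear in omega,
# and integrate the rotor speed imposed back on the rotating region.
rho, V = 1.225, 7.0           # air density (kg/m^3), wind speed (m/s), assumed
R, H = 0.5, 1.0               # rotor radius and height (m), assumed
A = 2 * R * H                 # swept area of the Savonius rotor (m^2)
J = 0.08                      # rotor moment of inertia (kg*m^2), assumed
c0, c1 = 0.02, 0.015          # resisting torque T_r = c0 + c1*omega, assumed

def aero_torque(Cm):
    """Aerodynamic torque from the moment coefficient."""
    return 0.5 * rho * A * V**2 * R * Cm

def step_omega(omega, Cm, dt=1e-3):
    """One explicit time step of J*domega/dt = T_aero - T_resist."""
    T_net = aero_torque(Cm) - (c0 + c1 * omega)
    return omega + dt * T_net / J

omega = 0.0
for step in range(200000):
    Cm = 0.3 * np.exp(-omega * R / V)      # placeholder for the CFD solver
    omega_new = step_omega(omega, Cm)
    if abs(omega_new - omega) < 1e-10:     # stationary regime reached
        break
    omega = omega_new

tsr = omega * R / V                        # tip speed ratio at steady state
print(f"steady omega = {omega:.3f} rad/s, TSR = {tsr:.3f}")
```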
Procedia PDF Downloads 161
10550 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies
Authors: Jaballah Jamil
Abstract:
KLD (Peter Kinder, Steve Lydenberg and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not have explicit monitoring of the rated firm, which suggests that KLD ratings may not include private information. Moreover, the incapacity of KLD to accurately predict the extra-financial rating of Enron casts doubt on the reliability of KLD ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates a number of equally-weighted portfolio returns, excess stock returns and book-to-market ratios to different dimensions of KLD social responsibility ratings. We first find that over the last two decades KLD rating changes influence stock returns and the book-to-market ratio of rated firms significantly and negatively. This finding suggests that a rise in corporate social responsibility rating lowers the firm's risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the Enron scandal. We find that after the Enron scandal this significant effect disappears. This finding supports the view that the Enron scandal annihilated KLD's effect on socially responsible investors. Therefore, our findings may call into question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.
Keywords: KLD social rating agency, investors' perception, investment decision, financial performance
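As an illustration of the kind of regression the abstract describes, the sketch below regresses excess returns on KLD rating changes with a post-Enron interaction. The data, variable names and effect sizes are simulated placeholders, not the study's sample.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "rating_change": rng.choice([-1, 0, 1], size=n),   # KLD up/down-grade
    "post_enron": rng.integers(0, 2, size=n),          # 1 after the scandal
    "book_to_market": rng.lognormal(0.0, 0.3, size=n),
})
# Simulated pattern mirroring the finding: a negative rating effect before
# Enron that disappears afterwards.
df["excess_return"] = (-0.02 * df.rating_change * (1 - df.post_enron)
                       + 0.01 * rng.standard_normal(n))

model = smf.ols("excess_return ~ rating_change * post_enron + book_to_market",
                data=df).fit()
# A significant rating_change coefficient offset by the interaction term
# reproduces the paper's before/after contrast.
print(model.summary().tables[1])
```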
Procedia PDF Downloads 443
10549 Enhancing Employee Innovative Behaviours Through Human Resource Wellbeing Practices
Authors: Jarrod Haar, David Brougham
Abstract:
The present study explores the links between supporting employee well-being and the potential benefits to employee performance. We focus on employee innovative work behaviors (IWBs), which have three stages: (1) development, (2) adoption, and (3) implementation of new ideas and work methods. We explore the role of organizational support focusing on employee well-being via High-Performance Work Systems (HPWS). HPWS are HR practices that are designed to enhance employees' skills, commitment, and ultimately, productivity. HPWS influence employee performance through building their skills, knowledge, and abilities, and there is meta-analytic support for firm-level HPWS influencing firm performance, but less attention has been paid to employee outcomes, especially innovation. We explore the HPWS-wellbeing being offered (e.g., EAPs, a well-being app, etc.) to capture organizational commitment to employee well-being. Under social exchange theory, workers should reciprocate their firm's offering of HPWS-wellbeing with greater efforts towards IWBs. Further, we explore playful work design as a mediator, which represents employees proactively creating work conditions that foster enjoyment/challenge but don't require any design change to the job itself. We suggest HPWS-wellbeing can encourage employees to become more playful, and ultimately more innovative. Finally, beyond direct effects, we examine whether these relations are similar by gender and ultimately test a moderated mediation model. Using N=1135 New Zealand employees, we established measures with confirmatory factor analysis (CFA), and all measures had good psychometric properties (α>.80). We controlled for age, tenure, education, and hours worked and analyzed data using the PROCESS macro (version 4.2), specifically model 8 (moderated mediation). We analyzed overall IWB, and then again across the three stages. Overall, we find HPWS-wellbeing is significantly related to overall IWBs and to each of the three stages (development, adoption, and implementation) individually. Similarly, HPWS-wellbeing shapes playful work design, and playful work design predicts overall IWBs and the three stages individually. Playful work design only partially mediates the effects of HPWS-wellbeing, which retains a significant direct effect. Moderation effects are supported, with males reporting a stronger effect from HPWS-wellbeing on playful work design (but not on IWB or any of the three stages) than females. Females report higher playful work design when HPWS-wellbeing is low, but the effects are reversed when HPWS-wellbeing is high (males higher). Thus, males respond more strongly under social exchange theory to HPWS-wellbeing, at least towards expressing playful work design. Finally, evidence of moderated mediation effects is found on overall IWBs and the three stages. Males report a significant indirect effect from HPWS-wellbeing on IWB (through playful work design), while female employees report no significant indirect effect; for females, the benefits of playful work design fully account for their IWBs. The models account for a small amount of variance in playful work design (12%) but more for IWBs (26%). The study highlights a gap in the literature on HPWS-wellbeing and provides empirical evidence of their importance towards worker innovation. Further, gendered effects suggest these benefits might not be equal.
The findings provide useful insights for organizations around how HR practices that support employee well-being are important, although how they work for different genders needs further exploration.
Keywords: human resource practices, wellbeing, innovation, playful work design
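For readers unfamiliar with PROCESS model 8, the following is a minimal sketch of moderated mediation using plain OLS on simulated data, with assumed roles X = HPWS-wellbeing, W = gender (1 = male), M = playful work design, Y = IWB; the study itself used the PROCESS macro v4.2 with bootstrapped confidence intervals and the control variables listed above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1135                                     # sample size matching the study
d = pd.DataFrame({"X": rng.normal(size=n), "W": rng.integers(0, 2, size=n)})
# Simulated moderated a-path: the X -> M effect is stronger when W = 1.
d["M"] = 0.2 * d.X + 0.25 * d.X * d.W + rng.normal(size=n)
d["Y"] = 0.1 * d.X + 0.5 * d.M + rng.normal(size=n)

med = smf.ols("M ~ X * W", data=d).fit()     # a-path, moderated by gender
out = smf.ols("Y ~ X * W + M", data=d).fit() # b-path plus moderated direct path

a1, a3, b = med.params["X"], med.params["X:W"], out.params["M"]
print("indirect effect, females (W=0):", round(a1 * b, 3))
print("indirect effect, males   (W=1):", round((a1 + a3) * b, 3))
print("index of moderated mediation  :", round(a3 * b, 3))
```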
Procedia PDF Downloads 84
10548 Designing the Maturity Model of Smart Digital Transformation through the Foundation Data Method
Authors: Mohammad Reza Fazeli
Abstract:
Nowadays, the fourth industrial revolution, known as the digital transformation of industries, is seen as one of the top subjects in the history of structural revolution, one that has led to the high-tech and tactical dominance of organizations. Despite these potential benefits, the undefined and non-transparent nature of the after-effects of investing in digital transformation has deterred many organizations from attempting this area of the industry. One of the important frameworks for understanding digital transformation in all organizations is the digital transformation maturity model. This model includes two main parts: digital transformation maturity dimensions and digital transformation maturity stages. Mediating factors between digital maturity and organizational performance at the individual level (e.g., motivations, attitudes) and at the organizational level (e.g., organizational culture) should be considered. For successful technology adoption processes, organizational development and human resources must go hand in hand and be supported by a sound communication strategy. Maturity models are developed to help organizations by providing broad guidance and a roadmap for improvement. However, a systematic review and analysis of the literature showed that none of the 18 maturity models in the field of digital transformation fully meets all the criteria of appropriateness, completeness, clarity, and objectivity. A maturity assessment framework potentially helps systematize assessment processes that create opportunities for change in processes and organizations enabled by digital initiatives, as well as long-term improvements at the project portfolio level. Cultural characteristics reflecting digital culture are not systematically integrated, and specific digital maturity models for the service sector are less clearly presented. It is also clearly evident that research on the maturity of digital transformation as a holistic concept is scarce and needs more attention in future research.
Keywords: digital transformation, organizational performance, maturity models, maturity assessment
Procedia PDF Downloads 113
10547 Subcontractor Development Practices and Processes: A Conceptual Model for LEED Projects
Authors: Andrea N. Ofori-Boadu
Abstract:
The purpose is to develop a conceptual model of subcontractor development practices and processes that strengthen the integration of subcontractors into construction supply chain systems for improved subcontractor performance on Leadership in Energy and Environmental Design (LEED) certified building projects. The construction management of a LEED project has an important objective of meeting sustainability certification requirements. This is in addition to the typical project management objectives of cost, time, quality, and safety for traditional projects, and it therefore increases the complexity of LEED projects. Considering that construction management organizations rely heavily on subcontractors, poor performance on complex projects such as LEED projects has been largely attributed to the unsatisfactory preparation of subcontractors. Furthermore, the extensive use of unique and non-repetitive short-term contracts limits the full integration of subcontractors into construction supply chains and hinders long-term cooperation and benefits that could enhance performance on construction projects. Improved subcontractor development practices are needed to better prepare and manage subcontractors so that complex objectives can be met or exceeded. While supplier development and supply chain theories and practices for the manufacturing sector have been extensively investigated to address similar challenges, comparable investigations in the construction sector are far less evident. Consequently, the objective of this research is to investigate effective subcontractor development practices and processes to guide construction management organizations in their development of a strong network of high-performing subcontractors. Drawing from foundational supply chain and supplier development theories in the manufacturing sector, a mixed interpretivist and empirical methodology is utilized to assess the body of knowledge within the literature for conceptual model development. A self-reporting survey with five-point Likert scale items and open-ended questions is administered to 30 construction professionals to estimate their perceptions of the effectiveness of 37 practices, classified into five subcontractor development categories. Data analysis includes descriptive statistics, weighted means, and t-tests that guide the effectiveness ranking of practices and categories. The results inform the proposed three-phased LEED subcontractor development program model, which focuses on preparation, development and implementation, and monitoring. Highly ranked LEED subcontractor pre-qualification, commitment, incentives, evaluation, and feedback practices are perceived as more effective when compared to practices requiring more direct involvement and linkages between subcontractors and construction management organizations. This is attributed to unfamiliarity, conflicting interests, lack of trust, and resource-sharing challenges. With strategic modifications, the recommended practices can be extended to other non-LEED complex projects. Additional research is needed to guide the development of subcontractor development programs that strengthen direct involvement between construction management organizations and their network of high-performing subcontractors. Insights from this present research strengthen theoretical foundations to support future research towards more integrated construction supply chains. In the long term, this would lead to increased performance, profits and client satisfaction.
Keywords: construction management, general contractor, supply chain, sustainable construction
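A sketch of the ranking analysis described above (per-practice means with t-tests over five-point Likert responses) might look as follows; the simulated responses stand in for the survey of 30 professionals rating 37 practices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_respondents, n_practices = 30, 37
responses = rng.integers(1, 6, size=(n_respondents, n_practices))  # 1..5 Likert

# Rank practices by mean effectiveness; a one-sample t-test against the
# neutral midpoint (3) flags practices perceived as significantly effective.
means = responses.mean(axis=0)
for j in np.argsort(means)[::-1][:5]:                 # top five practices
    t, p = stats.ttest_1samp(responses[:, j], popmean=3.0)
    print(f"practice {j+1:2d}: mean={means[j]:.2f}, t={t:+.2f}, p={p:.3f}")
```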
Procedia PDF Downloads 114
10546 Series Network-Structured Inverse Models of Data Envelopment Analysis: Pitfalls and Solutions
Authors: Zohreh Moghaddas, Morteza Yazdani, Farhad Hosseinzadeh
Abstract:
Nowadays, data envelopment analysis (DEA) models featuring network structures have gained widespread usage for evaluating the performance of production systems and activities (Decision-Making Units (DMUs)) across diverse fields. By examining the relationships between the internal stages of the network, these models offer valuable insights to managers and decision-makers regarding the performance of each stage and its impact on the overall network. To further empower system decision-makers, the inverse data envelopment analysis (IDEA) model has been introduced. This model allows the estimation of crucial parameter information while keeping the efficiency score unchanged or improved, enabling analysis of the sensitivity of system inputs or outputs according to managers' preferences. This empowers managers to apply their preferences and policies to resources, such as inputs and outputs, and to analyze various aspects like production, resource allocation processes, and resource efficiency enhancement within the system. The results obtained can be instrumental in making informed decisions in the future. The main result of this study is an analysis of the infeasibility and incorrect estimation that may arise in the theory and application of the inverse model of data envelopment analysis with network structures. To address these pitfalls, novel protocols are proposed to circumvent these shortcomings effectively. Subsequently, several theoretical and applied problems are examined and resolved through insightful case studies.
Keywords: inverse models of data envelopment analysis, series network, estimation of inputs and outputs, efficiency, resource allocation, sensitivity analysis, infeasibility
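For context, the forward efficiency scores that any inverse DEA analysis starts from come out of a standard linear program. The sketch below implements the classic single-stage, input-oriented CCR envelopment model (not the series-network variant the paper treats), with a toy data set as an assumption.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0],    # inputs:  rows = inputs,  cols = DMUs
              [1.0, 2.0, 1.5, 3.0]])
Y = np.array([[1.0, 2.0, 2.5, 2.0]])   # outputs: rows = outputs, cols = DMUs

def ccr_efficiency(o):
    """min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]               # decision vars: [theta, lam]
    A_in = np.c_[-X[:, [o]], X]               # X@lam - theta*x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]       # -Y@lam <= -y_o
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                            # theta = efficiency score

for o in range(X.shape[1]):
    print(f"DMU {o + 1}: efficiency = {ccr_efficiency(o):.3f}")
```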
Procedia PDF Downloads 57
10545 Modelling and Simulating CO2 Electro-Reduction to Formic Acid Using Microfluidic Electrolytic Cells: The Influence of Bi-Sn Catalyst and 1-Ethyl-3-Methyl Imidazolium Tetra-Fluoroborate Electrolyte on Cell Performance
Authors: Akan C. Offong, E. J. Anthony, Vasilije Manovic
Abstract:
A modified steady-state numerical model is developed for the electrochemical reduction of CO2 to formic acid. The numerical model achieves a current density (CD) of ~60 mA/cm2, a faradaic efficiency (FE) of ~98% and a conversion of ~80% for CO2 electro-reduction to formic acid in a microfluidic cell. The model integrates charge and species transport, mass conservation, and momentum with electrochemistry. Specifically, the influence of a Bi-Sn based nanoparticle catalyst (on the cathode surface) at different mole fractions and of the 1-ethyl-3-methyl imidazolium tetra-fluoroborate ([EMIM][BF4]) electrolyte on CD, FE and CO2 conversion to formic acid is studied. The reaction is carried out at a constant concentration of electrolyte (85% v/v [EMIM][BF4]). Based on the mass transfer characteristics analysis (concentration contours), the 0.5:0.5 mole ratio Bi-Sn catalyst displays the highest CO2 mole consumption in the cathode gas channel. After validating with experimental data (polarisation curves) from the literature, extensive simulations reveal the performance measures: CD, FE and CO2 conversion. Increasing the negative cathode potential increases the current densities for both formic acid and H2 formation. However, H2 formation is minimal as a result of insufficient hydrogen ions in the ionic liquid electrolyte. Moreover, the limited hydrogen ions have a negative effect on the formic acid CD. As the CO2 flow rate increases, CD, FE and CO2 conversion increase.
Keywords: carbon dioxide, electro-chemical reduction, ionic liquids, microfluidics, modelling
Procedia PDF Downloads 150
10544 Examining Actors’ Self-Concept Clarity, Sociotropy and Self-Monitoring Levels in Comparison with Their Peers
Authors: Ezgi Cetinkaya
Abstract:
In the psychological literature, there are few studies that focus on actors' self-perceptions and their social adjustment skills. Therefore, the aim of the study was to shed light on the self-concept clarity, sociotropy, and self-monitoring levels of professional actors. For this purpose, actors and non-actors were compared to their peers. The study was conducted with the participation of 106 actors and 131 non-actors. A descriptive method of research was employed, and data were collected through the Self-Concept Clarity Scale by Campbell et al. (1996), the Pleasing Others and Concern for Disapproval subscales of the Sociotropy and Autonomy Scale by Beck et al. (1983), and the Self-Monitoring Scale by Snyder (1983). ANOVA and correlation analyses were performed using SPSS. Results showed that there is no significant difference between actors and non-actors at any age in terms of self-concept clarity. Non-actors aged 25-35 were found to have the highest self-concept clarity, while the young actors had the lowest. The study did not reveal significant differences between the groups in terms of sociotropy scores; the actors' sociotropic tendencies were not enhanced by experience. The study demonstrated that 25-35-year-old actors are higher self-monitors than 25-35-year-old non-actors.
Keywords: self-concept, self-monitoring, autonomy, sociotropy, theatre, acting, creativity, identity
Procedia PDF Downloads 68
10543 X-Ray Crystallographic, Hirshfeld Surface Analysis and Docking Study of Phthalyl Sulfacetamide
Authors: Sanjay M. Tailor, Urmila H. Patel
Abstract:
Phthalyl Sulfacetamide is a well-known member of the antimicrobial sulfonamide family and a potent antitumor drug. The structural characteristics of 4-amino-N-(2-quinoxalinyl)benzene-sulfonamide (Phthalyl Sulfacetamide), C14H12N4O2S, have been studied by the method of X-ray crystallography. The compound crystallizes in the monoclinic space group P21/n with unit cell parameters a = 7.9841 Å, b = 12.8208 Å, c = 16.6607 Å, α = 90˚, β = 93.23˚, γ = 90˚ and Z = 4. The X-ray based three-dimensional structure analysis has been carried out by direct methods and refined to an R-value of 0.0419. The crystal structure is stabilized by intermolecular N-H…N, N-H…O and π-π interactions. Hirshfeld surface and, consequently, fingerprint analyses have been performed to study the nature of the interactions and their quantitative contributions towards the crystal packing. An analysis of Hirshfeld surfaces and fingerprint plots facilitates a comparison of intermolecular interactions, which are the key elements in building different supramolecular architectures. Docking is used for virtual screening, predicting the strongest binders based on various scoring functions. Docking studies are carried out on Phthalyl Sulfacetamide for better activity, which is important for the development of a new class of inhibitors.
Keywords: phthalyl sulfacetamide, crystal structure, Hirshfeld surface analysis, docking
Procedia PDF Downloads 352
10542 The Research of Hand-Grip Strength for Adults with Intellectual Disability
Authors: Haiu-Lan Chin, Yu-Fen Hsiao, Hua-Ying Chuang, Wei Lee
Abstract:
Adults with intellectual disability generally have insufficient physical activity, which is an important factor leading to premature weakness. Studies in recent years on frailty syndrome have accumulated substantial data about indicators of human aging, including unintentional weight loss, self-reported exhaustion, weakness, slow walking speed, and low physical activity. Of these indicators, hand-grip strength can be seen as a predictor of mortality, disability, complications, and increased length of hospital stay. Hand-grip strength in fact provides a comprehensive overview of one's vitality. This research investigates the hand-grip strength of adults with intellectual disabilities in facilities, institutions and workshops. The participants are 197 male adults (M=39.09±12.85 years old) and 114 female adults (M=35.80±8.2 years old) so far. The aim of the study is to characterize their hand-grip strength performance and to initiate hand-grip strength training in their daily life, which will slow the weakening of their physical condition. Test items include weight, bone density, basal metabolic rate (BMR) and static body balance, in addition to hand-grip strength. Hand-grip strength was measured by a hand dynamometer and classified into a normal group (≥ 30 kg for males and ≥ 20 kg for females) and a weak group (< 30 kg for males, < 20 kg for females). The analysis includes descriptive statistics and the indicators of grip strength for adults with intellectual disability. Though the research is still ongoing and the number of participants is increasing, the data indicate: (1) a correlation between hand-grip strength and degree of intellectual disability (p ≤ .001), basal metabolic rate (p ≤ .001), and static body balance (p ≤ .01). Nevertheless, there is no significant correlation between grip strength and basal metabolic rate, which had previously shown a significant correlation with hand-grip strength. (2) The difference between male and female subjects in hand-grip strength is significant; the hand-grip strength of male subjects (25.70±12.81 kg) is much higher than that of female subjects (16.30±8.89 kg). Compared to their female counterparts, male participants show greater individual differences, and the proportion of weakness between male and female subjects also differs. (3) Regression indicates that the main factors related to grip strength performance are degree of intellectual disability, height, static body balance, training and weight, in that order. (4) There is a significant difference in both hand-grip strength and static body balance between participants in facilities and those in workshops. The study supports established findings on sex and gender differences in health. Nevertheless, the average hand-grip strength of the left hand is higher than that of the right hand in both male and female subjects. Moreover, 71.3% of male subjects and 64.2% of female subjects perform better with their left hand-grip, which is a distinctive feature, especially in low degrees of intellectual disability.
Keywords: adult with intellectual disability, frailty syndrome, grip strength, physical condition
Procedia PDF Downloads 183
10541 Finding the Elastic Field in an Arbitrary Anisotropic Media by Implementing Accurate Generalized Gaussian Quadrature Solution
Authors: Hossein Kabir, Amir Hossein Hassanpour Mati-Kolaie
Abstract:
In the current study, the elastic field in an anisotropic elastic medium is determined by implementing a general semi-analytical method. In this specific methodology, the displacement field is computed as a sum of finite functions with unknown coefficients. These functions satisfy exactly both the homogeneous and inhomogeneous boundary conditions in the proposed media. It is worth mentioning that the unknown coefficients are determined by implementing the principle of minimum potential energy. The numerical integration is implemented by employing the Generalized Gaussian Quadrature solution. Furthermore, with the aid of the calculated unknown coefficients, the displacement field, as well as the other parameters of the elastic field, is obtainable as well. Finally, the comparison of the previous analytical method with the current semi-analytical method demonstrates the efficacy of the present methodology.
Keywords: anisotropic elastic media, semi-analytical method, elastic field, generalized Gaussian quadrature solution
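The essence of the method, a displacement expansion whose coefficients are fixed by minimizing the potential energy with all integrals done by Gaussian quadrature, can be illustrated on a one-dimensional bar; the sketch below is our simplification, not the paper's anisotropic formulation.

```python
import numpy as np

# 1D bar fixed at x=0 under distributed load q: expand u(x) = sum_k c_k*x**(k+1)
# (each basis term satisfies u(0)=0), minimize Pi = int EA/2*(u')^2 - q*u dx,
# which reduces to the linear system K c = f. EA, q, L are assumed values.
EA, q, L, nb = 1.0, 1.0, 1.0, 4
xg, wg = np.polynomial.legendre.leggauss(8)   # Gauss-Legendre on [-1, 1]
x = 0.5 * L * (xg + 1.0)                      # map nodes to [0, L]
w = 0.5 * L * wg

def dphi(k, xx):                              # derivative of phi_k = x**(k+1)
    return (k + 1) * xx**k

K = np.array([[np.sum(w * EA * dphi(i, x) * dphi(j, x)) for j in range(nb)]
              for i in range(nb)])            # K_ij = int EA*phi_i'*phi_j' dx
f = np.array([np.sum(w * q * x**(i + 1)) for i in range(nb)])

c = np.linalg.solve(K, f)                     # stationarity of Pi
u_tip = sum(c[k] * L**(k + 1) for k in range(nb))
print("tip displacement:", u_tip, "exact:", q * L**2 / (2 * EA))
```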
Procedia PDF Downloads 327
10540 Evaluation of Ensemble Classifiers for Intrusion Detection
Authors: M. Govindarajan
Abstract:
One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performance is analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach is based on three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to the performance of other standard homogeneous and heterogeneous ensemble methods. The standard homogeneous ensemble methods include error-correcting output codes (ECOC) and Dagging; the heterogeneous ensemble methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Also, heterogeneous models exhibit better results than homogeneous models on standard intrusion detection datasets.
Keywords: data mining, ensemble, radial basis function, support vector machine, accuracy
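As a rough illustration of the two ensemble ideas being compared, consider the sketch below on synthetic data standing in for an intrusion detection set; the RBF network is approximated by an RBF-kernel SVM, which is our assumption, and the arcing and stacking variants are omitted for brevity.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

single = SVC(kernel="rbf", random_state=0)
bagged = BaggingClassifier(estimator=SVC(kernel="rbf"), n_estimators=25,
                           random_state=0)               # homogeneous ensemble
hybrid = VotingClassifier(                               # heterogeneous ensemble
    estimators=[("rbf_wide", SVC(kernel="rbf", gamma=0.01, probability=True)),
                ("rbf_narrow", SVC(kernel="rbf", gamma=0.5, probability=True))],
    voting="soft")

for name, clf in [("single SVM", single), ("bagged SVM", bagged),
                  ("hybrid voting", hybrid)]:
    clf.fit(X_tr, y_tr)
    print(f"{name:13s} accuracy = {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```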
Procedia PDF Downloads 252
10539 Images Selection and Best Descriptor Combination for Multi-Shot Person Re-Identification
Authors: Yousra Hadj Hassen, Walid Ayedi, Tarek Ouni, Mohamed Jallouli
Abstract:
To re-identify a person is to check whether he/she has already been seen over a camera network. Recently, re-identifying people over large public camera networks has become a crucial task of great importance to ensure public security. The vision community has deeply investigated this area of research. Most existing research relies only on the spatial appearance information from either one or multiple person images. Actually, the real person re-id framework is a multi-shot scenario. However, efficiently modeling a person's appearance and choosing the best samples remain challenging problems. In this work, an extensive comparison of state-of-the-art descriptors associated with the proposed frame selection method is presented. Specifically, we evaluate the sample selection approach using multiple proposed descriptors. We show the effectiveness and advantages of the proposed method by extensive comparisons with related state-of-the-art approaches using two standard datasets, PRID2011 and iLIDS-VID.
Keywords: camera network, descriptor, model, multi-shot, person re-identification, selection
Procedia PDF Downloads 285
10538 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping
Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa
Abstract:
The artificial neural network is one of the interesting techniques that have been advantageously used to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping. The latter is based on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer makes it possible to find the optimal network parameters that fit the mapping data. Moreover, it permits a decrease in the training time during the computation process, which avoids the use of computers with high memory usage.
Keywords: neural network computing, continuous functions generating the input-output mapping, decreasing the training time, machines with big memories
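A baseline version of the task, fitting a one-dimensional continuous mapping with a small feed-forward network, can be sketched as follows; the abstract's specific output-layer coding scheme is not reproduced, so this shows only the conventional setup it is compared against, with an assumed target function.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
x = np.linspace(-2 * np.pi, 2 * np.pi, 400).reshape(-1, 1)
y = np.sin(x).ravel() + 0.05 * rng.standard_normal(400)  # noisy 1-D mapping

net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(x, y)                                 # learn the input-output mapping
print("training R^2:", round(net.score(x, y), 4))
```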
Procedia PDF Downloads 286
10537 Clustering Based Level Set Evaluation for Low Contrast Images
Authors: Bikshalu Kalagadda, Srikanth Rangu
Abstract:
The main objective of image segmentation is to extract objects with respect to some input features. One of the important methods for image segmentation is the level set method. Medical images and synthetic images generally have low-contrast pixel profiles, and for such images it is difficult to locate the features of interest. The conventional level set function develops irregularities during the evolution of object contours, which destroys the stability of the evolution process. As a remedy for this problem, a new hybrid algorithm, Clustering Level Set Evolution, is proposed. Kernel fuzzy particle swarm optimization clustering is used with the Distance Regularized Level Set (DRLS) and the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) methods. The ability to identify different regions becomes easier, with improved speed. The efficiency of the modified method can be evaluated by comparison with the previous method for similar specifications. The comparison can be carried out on medical and synthetic images.
Keywords: segmentation, clustering, level set function, re-initialization, kernel fuzzy, swarm optimization
Procedia PDF Downloads 354
10536 The Study of Visible Light Active Bismuth Modified Nitrogen Doped Titanium Dioxide Photocatlysts
Authors: B. Benalioua, I. Benyamina, A. Bentouami, B. Boury
Abstract:
The objective of this study is the synthesis of a new TiO2-based photocatalyst and its application in the photo-degradation of an acid dye under visible light. The material obtained was characterized by different techniques such as diffuse reflectance UV-Vis spectroscopy (DRS), X-ray diffraction (XRD) and scanning electron microscopy (SEM). The photocatalytic efficiency of the Bi, N co-doped TiO2 treated at 600°C for 1 h was tested on Indigo Carmine under visible light irradiation and compared with that of the commercial titanium oxide TiO2-P25 (Degussa). The XRD characterization of the Bi-N-TiO2 (600°C) material revealed the presence of the anatase phase and the absence of the rutile phase, in comparison with the TiO2-P25 diffractogram. Characterization by UV-visible diffuse reflectance (DRS) showed that Bi-N-TiO2 exhibits a redshift (toward the visible) relative to commercial titanium oxide TiO2-P25; this property promises photocatalytic activity of Bi-N-TiO2 under visible light. Indeed, the photocatalytic efficiency of Bi-N-TiO2 under visible light is shown by the complete discoloration of a 16 mg/L indigo carmine solution after 40 minutes, whereas with TiO2-P25 discoloration is achieved after 90 minutes.
Keywords: POA, heterogeneous photocatalysis, TiO2, co-doping
Procedia PDF Downloads 380
10535 Differences in Assessing Hand-Written and Typed Student Exams: A Corpus-Linguistic Study
Authors: Jutta Ransmayr
Abstract:
The digital age has long arrived at Austrian schools, and both society and educationalists demand that digital means be integrated accordingly into day-to-day school routines. Therefore, the Austrian school-leaving exam (A-levels) can now be written either by hand or by using a computer. However, the choice of writing medium (pen and paper or computer) for written examination papers, which are considered 'high-stakes' exams, raises a number of questions that have not yet been adequately investigated and answered, such as: What effects do the different conditions of text production in the written German A-levels have on normative linguistic accuracy? How do the spelling skills in German A-level papers written with a pen differ from those written on the computer? How is the teachers' assessment related to this? And which practical desiderata for German didactics can be derived from this? These questions were investigated in a trilateral pilot project of the Austrian Center for Digital Humanities (ACDH) of the Austrian Academy of Sciences and the University of Vienna, in cooperation with the Austrian Ministry of Education and the Council for German Orthography. A representative Austrian learner corpus, consisting of around 530 German A-level papers from all over Austria (pen- and computer-written), was set up in order to subject it to a quantitative (corpus-linguistic and statistical) and qualitative investigation with regard to the spelling and punctuation performance of the high school graduates and the differences between pen- and computer-written papers and their assessments. Relevant studies are currently available mainly from the Anglophone world. These have shown that writing on the computer increases the motivation to write and has positive effects on the length of the text and, in some cases, also on its quality. Depending on the writing situation and other technical aids, better results in terms of spelling and punctuation could also be found in the computer-written texts compared to the handwritten ones. Studies also point towards a tendency among teachers to rate handwritten texts better than computer-written texts. In this paper, the first comparable results from the German-speaking area are presented. The results show, on the one hand, that there are significant differences between handwritten and computer-written work with regard to performance in orthography and punctuation. On the other hand, the corpus-linguistic investigation and the subsequent statistical analysis made it clear that not only do the teachers' assessments of the students' spelling performance vary enormously but so do the overall assessments of the exam papers; the factor of the production medium (pen and paper or computer) also seems to play a decisive role.
Keywords: exam paper assessment, pen and paper or computer, learner corpora, linguistics
Procedia PDF Downloads 174
10534 Evaluation of Suitable Housing System for Adoption in Addis Ababa
Authors: Yidnekachew Daget, Hong Zhang
Abstract:
The decision-making process for selecting a suitable housing system for application in housing construction has been a challenge for many developing countries. This study evaluates the decision process to identify suitable housing systems for adoption in Addis Ababa. Ten industrialized housing systems were considered as alternatives for comparison. These systems have been used in housing developments in different parts of the world. A relevant literature review and contextual analysis were conducted. An analytical hierarchy process and an Expert Choice Comparion platform were employed as the research technique and tool to evaluate the professionals' level of preference with regard to the housing systems. The findings revealed the priority ranking and characteristics of the suitable housing systems to be adopted for application in housing development. The decision criteria and the analytical process used in this study can help decision-makers and housing developers in developing countries make effective evaluations and decisions.
Keywords: analytical hierarchy process, decision-making, expert choice comparion, industrialized housing systems
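The core AHP computation behind such an evaluation, reducing a pairwise comparison matrix to a priority vector and checking judgment consistency, can be sketched as follows; the Saaty-scale judgments below are illustrative assumptions, not the study's elicited data.

```python
import numpy as np

A = np.array([[1,   3,   5,   7],      # pairwise comparisons of 4 hypothetical
              [1/3, 1,   3,   5],      # housing systems on one criterion
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]], dtype=float)

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                            # priority (preference) vector

n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
print("priorities:", np.round(w, 3), " CR =", round(ci / ri, 3))  # CR < 0.1 ok
```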
Procedia PDF Downloads 272
10533 Estimation of Hysteretic Damping in Steel Dual Systems with Buckling Restrained Brace and Moment Resisting Frame
Authors: Seyed Saeid Tabaee, Omid Bahar
Abstract:
Nowadays, energy dissipation devices are commonly used in structures. A high rate of energy absorption during earthquakes is the benefit of using such devices, which results in reduced damage to structural elements, specifically columns. The hysteretic damping capacity of energy dissipation devices is the key point, yet it may adversely complicate the analysis and design of such structures. This effect may generally be represented by equivalent viscous damping. The equivalent viscous damping may be obtained from the expected hysteretic behavior under the design or maximum considered displacement of a structure. In this paper, the hysteretic damping coefficient of a steel moment resisting frame (MRF) whose performance is enhanced by a buckling restrained brace (BRB) system has been evaluated. Foresight of the damping fraction between the BRB and MRF is indispensable for seismic design procedures like the Direct Displacement-Based Design (DDBD) method. This paper presents an approach to calculate the damping fraction for such systems by carrying out dynamic nonlinear time history analysis (NTHA) under harmonic loading tuned to the natural frequency of the system. Two steel moment frame structures, one equipped with a BRB and the other without, are studied simultaneously. The extensive analysis shows that the damping fraction of each system may be calculated from its share of the story shear. In this way, the contribution of each BRB in the floors and their overall contribution to the structural performance may be clearly recognized in advance.
Keywords: buckling restrained brace, direct displacement based design, dual systems, hysteretic damping, moment resisting frames
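The quantity being estimated can be illustrated directly: once the harmonic NTHA settles into a steady hysteresis loop, the equivalent viscous damping ratio follows from the loop area via xi_eq = E_D / (4*pi*E_S). The idealized friction-spring loop below uses assumed parameters, not the frames analyzed in the paper.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 2000)
u = 0.05 * np.sin(t)                        # imposed harmonic displacement
F = 800.0 * u + 5.0 * np.sign(np.cos(t))    # spring + friction force (assumed)

# Dissipated energy per cycle = area of the closed force-displacement loop.
E_D = 0.5 * np.sum((F[1:] + F[:-1]) * np.diff(u))
E_S = 0.5 * 800.0 * 0.05**2                 # strain energy at peak displacement
xi_eq = E_D / (4.0 * np.pi * E_S)
print(f"equivalent viscous damping ratio = {xi_eq:.3f}")
```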
Procedia PDF Downloads 436
10532 Performance of the New Laboratory-Based Algorithm for HIV Diagnosis in Southwestern China
Authors: Yanhua Zhao, Chenli Rao, Dongdong Li, Chuanmin Tao
Abstract:
The Chinese Centers for Disease Control and Prevention (CCDC) issued a new laboratory-based algorithm for HIV diagnosis in April 2016, which initially screens with a combination HIV-1/HIV-2 antigen/antibody fourth-generation immunoassay (IA) followed, when reactive, by an HIV-1/HIV-2 undifferentiated antibody IA in duplicate. Reactive specimens with concordant results undergo supplemental tests with western blots or HIV-1 nucleic acid tests (NATs), and specimens with non-reactive or discordant results receive HIV-1 NATs or p24 antigen tests or 2-4 weeks of follow-up tests. However, little data evaluating the application of the new algorithm have been reported to date. This study evaluated the performance of the new laboratory-based HIV diagnostic algorithm in an inpatient population of Southwest China over the initial 6 months, compared with the old algorithm. Plasma specimens collected from inpatients from May 1, 2016, to October 31, 2016, were submitted to the laboratory for HIV screening performed by both the new testing algorithm and the old version. The sensitivity and specificity of the algorithms and the difference in the categorized numbers of plasma specimens were calculated. Under the new algorithm for HIV diagnosis, 170 of the total 52,749 plasma specimens were confirmed as HIV-infected (0.32%). The sensitivity and specificity of the new algorithm were 100% (170/170) and 100% (52,579/52,579), respectively, while 167 HIV-1 positive specimens were identified by the old algorithm, with a sensitivity of 98.24% (167/170) and a specificity of 100% (52,579/52,579). Three acute HIV-1 infections (AHIs) and two early HIV-1 infections (EHIs) were identified by the new algorithm; the former were missed by the old procedure. Compared with the old version, the new algorithm produced fewer WB-indeterminate results (2 vs. 16, p = 0.001), which led to fewer follow-up tests. Therefore, the new HIV testing algorithm is more sensitive for detecting acute HIV-1 infections while maintaining the ability to verify established HIV-1 infections, and it can dramatically decrease the number of WB-indeterminate specimens.
Keywords: algorithm, diagnosis, HIV, laboratory
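The branching logic summarized above can be sketched as plain code; the dispositions and the handling of discordant duplicates are simplified assumptions for illustration, and the CCDC document governs actual practice.

```python
def new_algorithm(screen_reactive, dup1_reactive, dup2_reactive):
    """Disposition of one plasma specimen under the 2016 CCDC flow (sketch)."""
    if not screen_reactive:                       # 4th-generation IA screen
        return "HIV negative"
    if dup1_reactive and dup2_reactive:           # concordant reactive duplicates
        return "supplemental test: western blot or HIV-1 NAT"
    if not dup1_reactive and not dup2_reactive:   # concordant non-reactive
        return "HIV-1 NAT / p24 antigen / 2-4 week follow-up"
    return "discordant: HIV-1 NAT / p24 antigen / 2-4 week follow-up"

print(new_algorithm(True, True, True))    # -> supplemental confirmation
print(new_algorithm(True, False, False))  # -> NAT/p24/follow-up (catches AHIs)
```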
Procedia PDF Downloads 404
10531 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing
Authors: Kedar Hardikar, Joe Varghese
Abstract:
Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly to protecting electronic components by the formation of a 'Faraday cage.' The reliability requirements for the conductive adhesive vary widely depending on the application and expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the conductive adhesive. The degradation of the adhesive is dependent upon the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature and high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to enable projection of test data and observed failures to predict field performance, systematic development of an acceleration factor between the test conditions and field conditions is crucial. Common acceleration factor models such as the Arrhenius model are based on rate kinetics and typically rely on an assumption of linear degradation in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in an electronic circuit of a capacitive sensor. The degradation of the conductive adhesive in a high temperature and humidity environment is quantified by the capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, it is seen that the degradation is nonlinear in time and exhibits a square-root-of-time dependence. It is also shown that for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate nonlinear degradation of the conductive adhesive into the development of an acceleration factor. This method can be extended to applications where nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that depending on the expected product lifetime, the use of the conventional linear degradation approach can overestimate or underestimate the field performance. This work provides guidelines for the suitability of the linear degradation approximation for such varied applications.
Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model
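The practical consequence of square-root-of-time degradation for acceleration factors can be sketched as follows: if degradation grows as D = k*sqrt(t), the time to reach a threshold D_c is (D_c/k)^2, so the test-to-field acceleration factor becomes the square of the rate ratio. The Peck-like rate dependence and all parameter values below are illustrative assumptions.

```python
import numpy as np

def rate_k(rh, temp_c, n=2.7, ea=0.79, k0=1.0):
    """Degradation rate constant, Peck-like humidity/temperature form (assumed)."""
    kb, T = 8.617e-5, temp_c + 273.15
    return k0 * rh**n * np.exp(-ea / (kb * T))

k_test = rate_k(rh=0.85, temp_c=85.0)     # 85C/85%RH chamber condition
k_field = rate_k(rh=0.60, temp_c=30.0)    # assumed field condition

af_linear = k_test / k_field              # what a linear-degradation model uses
af_sqrt = (k_test / k_field) ** 2         # correct when D = k*sqrt(t)
print(f"AF assuming linear degradation : {af_linear:.1f}")
print(f"AF with sqrt-t degradation     : {af_sqrt:.1f}")
# The gap between the two is the over/underestimation the paper warns about.
```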
Procedia PDF Downloads 140
10530 Modeling the Risk Perception of Pedestrians Using a Nested Logit Structure
Authors: Babak Mirbaha, Mahmoud Saffarzadeh, Atieh Asgari Toorzani
Abstract:
Pedestrians are the most vulnerable road users since they do not have a protective shell. One of the most common collisions they are involved in is the pedestrian-vehicle collision at intersections. In order to develop appropriate countermeasures to improve their safety, research has to be conducted to identify the factors that affect the risk of getting involved in such collisions. More specifically, this study investigates factors such as the influence of walking alone or carrying a baby while crossing the street, the observable age of the pedestrian, the speed of pedestrians, and the speed of approaching vehicles on the risk perception of pedestrians. A nested logit model was used for modeling the behavioral structure of pedestrians. The results show that the presence of more lanes at intersections and not being alone, especially carrying a baby while crossing, decrease the probability of taking a risk among pedestrians. Also, it seems that teenagers show riskier behaviors in crossing the street in comparison to other age groups. The speed of approaching vehicles was also found significant: the probability of risk taking among pedestrians decreases with increasing speed of the approaching vehicle in both the first and second lanes of crossings.
Keywords: pedestrians, intersection, nested logit, risk
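For readers unfamiliar with the model form, a minimal sketch of nested logit choice probabilities is given below; the nesting of crossing behaviors, the utilities and the scale parameters are illustrative assumptions, not the study's estimated coefficients.

```python
import numpy as np

def nested_logit_probs(V, nests, lam):
    """V: alt -> utility; nests: nest -> list of alts; lam: nest -> scale."""
    iv = {m: np.log(sum(np.exp(V[a] / lam[m]) for a in alts))
          for m, alts in nests.items()}                 # inclusive values
    denom = sum(np.exp(lam[m] * iv[m]) for m in nests)
    probs = {}
    for m, alts in nests.items():
        p_nest = np.exp(lam[m] * iv[m]) / denom         # P(choose nest m)
        within = sum(np.exp(V[a] / lam[m]) for a in alts)
        for a in alts:
            probs[a] = p_nest * np.exp(V[a] / lam[m]) / within
    return probs

V = {"cross_now": 0.4, "cross_gap": 0.9, "wait": 0.7}   # assumed utilities
nests = {"risky": ["cross_now"], "cautious": ["cross_gap", "wait"]}
lam = {"risky": 1.0, "cautious": 0.6}                   # nest scale parameters
print(nested_logit_probs(V, nests, lam))
```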
Procedia PDF Downloads 190
10529 The Effectiveness and Accuracy of the Schulte Holt IOL Toric Calculator Processor in Comparison to Manually Input Data into the Barrett Toric IOL Calculator
Authors: Gabrielle Holt
Abstract:
This paper seeks to establish the efficacy of the Schulte Holt IOL Toric Calculator Processor (Schulte Holt ITCP). The study was completed by manually inputting data into the Barrett Toric Calculator and recording the number of minutes taken to complete the toric calculations, the number of errors identified during completion, and the distractions during completion. That data is then compared to the number of minutes taken by the Schulte Holt ITCP to complete the same calculations, also using the Barrett method, as well as the number of errors identified in the Schulte Holt ITCP. The data clearly demonstrate a momentous advantage for the Schulte Holt ITCP, which notably reduces the time spent doing toric calculations as well as the number of errors. With the ever-growing number of cataract surgeries taking place around the world and waitlists increasing, the Schulte Holt IOL Toric Calculator Processor may well demonstrate a way forward to increase the availability of ophthalmologists and ophthalmic staff while maintaining patient safety.
Keywords: Toric, toric lenses, ophthalmology, cataract surgery, toric calculations, Barrett
Procedia PDF Downloads 100
10528 Analysis of Soft and Hard X-Ray Intensities Using Different Shapes of Anodes in a 4kJ Mather Type Plasma Focus Facility
Authors: Mahsa Mahtab, Morteza Habibi
Abstract:
The effect of different anode tip geometries on the intensity of soft and hard X-rays emitted from a 4 kJ plasma focus device is investigated. For this purpose, 5 different anode tips are used. The shapes of the uppermost region of these anodes are cylindrical-flat, cylindrical-hollow, spherical-convex, cone-flat and cone-hollow. The analyzed data show that the cone-flat, spherical-convex and cone-hollow anodes significantly increase X-ray intensity, in that order, in comparison with the cylindrical-flat anode, while the cylindrical-hollow tip decreases it. Reducing the anode radius at its end in conic or spherical anodes enhances SXR by increasing plasma density through collecting a greater mass of gas and a more gradual transition phase, forming a more stable dense plasma pinch. HXR is enhanced by increasing the energy of electrons colliding with the anode surface through a rise in the induced electric field. Finally, the cone-flat anode is recommended for cases in which the plasma focus device is used as an X-ray source, due to its highest yield of X-ray emissions.
Keywords: plasma focus, anode tip, HXR, SXR, pinched plasma
Procedia PDF Downloads 401
10527 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years. It has become more important to understand customers' needs in this strong market of telecom industries, especially for those customers who are looking to change their service providers. Churn prediction is therefore now a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish this. Churn prediction has become a very important topic in terms of machine learning classification in the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important to building an effective churn prediction model. This paper aims to predict churn and identify factors of customers' churn based on their past service usage history. Aiming at this objective, the study makes use of feature selection, normalization, and feature engineering. Then, this study compares the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1-score and ROC-AUC. Comparing the results of this study with existing models has proven to produce better results. The results showed that Gradient Boosting with the feature selection technique outperformed the others in this study by achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score
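A sketch of the winning configuration, feature selection and normalization feeding Gradient Boosting scored by F1 and ROC-AUC, might look as follows; synthetic imbalanced data stands in for the Orange dataset, and the number of selected features is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=3000, n_features=30, n_informative=12,
                           weights=[0.85, 0.15], random_state=0)  # ~15% churners
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=15)),  # feature selection
    ("scale", StandardScaler()),                          # normalization
    ("model", GradientBoostingClassifier(random_state=0)),
])
pipe.fit(X_tr, y_tr)

pred = pipe.predict(X_te)
proba = pipe.predict_proba(X_te)[:, 1]
print("F1     :", round(f1_score(y_te, pred), 3))
print("ROC-AUC:", round(roc_auc_score(y_te, proba), 3))
```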
Procedia PDF Downloads 137
10526 Assessing the Impact of Autonomous Vehicles on Supply Chain Performance – A Case Study of Agri-Food Supply Chain
Authors: Nitish Suvarna, Anjali Awasthi
Abstract:
In an era marked by rapid technological advancements, the integration of Autonomous Vehicles into supply chain networks represents a transformative shift, promising to redefine the paradigms of logistics and transportation. This thesis delivers a comprehensive assessment of the impact of autonomous vehicles on supply chain performance, with a particular focus on network design, operational efficiency, and environmental sustainability. Employing the advanced simulation capabilities of anyLogistix (ALX), the study constructs a digital twin of a conventional supply chain network, encompassing suppliers, production facilities, distribution centers, and customer endpoints. The research methodically integrates Autonomous Vehicles into this intricate network, aiming to unravel the multifaceted effects on transportation logistics, including transit times, cost-efficiency, and sustainability. Through simulations and scenario analysis, the study scrutinizes the operational resilience and adaptability of supply chains in the face of dynamic market conditions and disruptive technologies like Autonomous Vehicles. Furthermore, the thesis undertakes a carbon footprint analysis, quantifying the environmental benefits and challenges associated with the adoption of Autonomous Vehicles in supply chain operations. The insights from this research are anticipated to offer a strategic framework for industry stakeholders, guiding the adoption of Autonomous Vehicles to foster a more efficient, responsive, and sustainable supply chain ecosystem. The findings aim to serve as a cornerstone for future research and practical implementations in the realm of intelligent transportation and supply chain management.
Keywords: autonomous vehicle, agri-food supply chain, ALX simulation, anyLogistix
Procedia PDF Downloads 80
10525 Experimental Assessment of Polypropylene Plastic Aggregates(PPA) for Pavement Construction: Their Mechanical Properties via Marshall Test
Authors: Samiullah Bhatti, Safdar Abbas Zaidi, Syed Murtaza Ali Jafri
Abstract:
This research paper presents the results of using plastic aggregates in flexible pavement. Plastic aggregates were prepared from recycled polypropylene (PP) products and tested with the Marshall apparatus. Grade 60/70 bitumen was chosen for this research, with total contents of 2.5%, 3% and 3.5%. Plastic aggregates were mixed with natural aggregates in different proportions, ranging from 10% to 100% with an increment of 10%. Therefore, a total of 10 Marshall cakes were prepared with plastic aggregates, in addition to a standard pavement sample; in total, 33 samples were tested for Marshall stability, flow and voids in mineral aggregates. The results show an increase in the measured values when the bitumen content changes from 2.5% to 3%, after which they decline again. Thus, 3% bitumen content was found to be the optimum value for flexible pavements. Among all the samples, the 20% PP aggregate sample was found satisfactory with respect to all the standards provided by ASTM. Therefore, it is suggested to use 20% plastic aggregates in flexible pavement construction. A comparison of bearing capacity and skid resistance is also presented.
Keywords: Marshall test, polypropylene plastic, plastic aggregates, flexible pavement alternative, recycling of plastic waste
Procedia PDF Downloads 148
10524 Comparison between Radiocarbon and Dendrochronology Ages Obtained on a 700 Years Tree-Ring Sequence from Northern Romania
Authors: G. Sava, I. Popa, T. Sava, A. Ion, M. Ilie, C. Manailescu, A. Robu
Abstract:
At the RoAMS laboratory in Bucharest, we have sought a head-to-head comparison between AMS radiocarbon dating and dendrochronological dating, aiming to point out and explain any differences or similarities that might appear between their results. As the subject of this investigation, we fixed our attention on a sequence of tree rings spanning a period of 700 years, starting at 1000 AD. The samples were collected from northern Romanian territory within the Moldavia region and were provided by the 'Marin Dracea - National Institute for Research and Development in Forestry'. All 23 single-ring wood samples were radiocarbon dated using alpha-cellulose extraction, followed by graphitization in an AGE3 installation. A wiggle-matching procedure was applied to reduce the radiocarbon uncertainties of the calibrated ages. The results showed good agreement on 3 out of 4 wood cores; the age shift of one of the wood cores was interpreted as an uncertain dendrochronological match, which was subsequently corrected.
Keywords: wiggle matching, tree-ring radiocarbon dating, dendrochronology, AMS radiocarbon dating, radiocarbon dating in Romania
Procedia PDF Downloads 186
10523 Weakly Solving Kalah Game Using Artificial Intelligence and Game Theory
Authors: Hiba El Assibi
Abstract:
This study aims to weakly solve Kalah, a two-player board game, by developing a start-to-finish winning strategy using an optimized Minimax algorithm with Alpha-Beta Pruning. In weakly solving Kalah, our focus is on creating an optimal strategy from the game's beginning rather than analyzing every possible position. The project will explore additional enhancements like symmetry checking and code optimizations to speed up the decision-making process. This approach is expected to give insights into efficient strategy formulation in board games and potentially help create games with a fair distribution of outcomes. Furthermore, this research provides a unique perspective on human versus Artificial Intelligence decision-making in strategic games. By comparing the AI-generated optimal moves with human choices, we can explore how seemingly advantageous moves can, in the long run, be harmful, thereby offering a deeper understanding of strategic thinking and foresight in games. Moreover, this paper discusses the evaluation of our strategy against existing methods, providing insights on performance and computational efficiency. We also discuss the scalability of our approach to the game, considering different board sizes (numbers of pits and stones) and rules (different variations), and studying how these affect performance and complexity. The findings have potential implications for the development of AI applications in strategic game planning, enhance our understanding of human cognitive processes in game settings, and offer insights into creating balanced and engaging game experiences.
Keywords: minimax, alpha beta pruning, transposition tables, weakly solving, game theory
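The core recursion the study builds on can be sketched compactly. The depth-limited Python implementation of minimax with alpha-beta pruning on Kalah(6,4) below is illustrative only: weakly solving the game additionally requires full-depth search, transposition tables and symmetry checks, and the end-of-game stone sweep is simplified here.

```python
import math

# Board layout: pits 0-5 and store 6 belong to player 0; pits 7-12 and store 13
# to player 1. Standard sowing, capture and extra-turn rules are implemented.

def legal_moves(board, player):
    pits = range(0, 6) if player == 0 else range(7, 13)
    return [p for p in pits if board[p] > 0]

def sow(board, player, pit):
    """Apply one move; return (new_board, player_to_move_next)."""
    b = board[:]
    seeds, i = b[pit], pit
    b[pit] = 0
    skip = 13 if player == 0 else 6                 # opponent's store is skipped
    while seeds:
        i = (i + 1) % 14
        if i == skip:
            continue
        b[i] += 1
        seeds -= 1
    own_store = 6 if player == 0 else 13
    own_pits = range(0, 6) if player == 0 else range(7, 13)
    if i in own_pits and b[i] == 1 and b[12 - i] > 0:   # capture rule
        b[own_store] += b[i] + b[12 - i]
        b[i] = b[12 - i] = 0
    return b, (player if i == own_store else 1 - player)  # extra-turn rule

def minimax(board, player, depth, alpha=-math.inf, beta=math.inf):
    moves = legal_moves(board, player)
    if depth == 0 or not moves:
        return board[6] - board[13], None           # score from player 0's view
    best = None
    if player == 0:                                 # maximizing player
        value = -math.inf
        for m in moves:
            nb, nxt = sow(board, 0, m)
            v, _ = minimax(nb, nxt, depth - 1, alpha, beta)
            if v > value:
                value, best = v, m
            alpha = max(alpha, value)
            if alpha >= beta:                       # beta cutoff: prune
                break
        return value, best
    value = math.inf                                # minimizing player
    for m in moves:
        nb, nxt = sow(board, 1, m)
        v, _ = minimax(nb, nxt, depth - 1, alpha, beta)
        if v < value:
            value, best = v, m
        beta = min(beta, value)
        if alpha >= beta:                           # alpha cutoff: prune
            break
    return value, best

start = [4] * 6 + [0] + [4] * 6 + [0]               # Kalah(6,4) initial position
print(minimax(start, 0, depth=8))                   # (value, best opening move)
```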
Procedia PDF Downloads 58