Search results for: performance prism model

21246 An Exploratory Study to Investigate the Impact of Corporate Social Responsibility on Luxury Brand Avoidance in India

Authors: Glyn Atwal, Douglas Bryson

Abstract:

The rapid expansion of a consumer class in India has also coincided with an increasing awareness of social and environmental issues. The overall objective of this study is to explore the extent to which Corporate Social Responsibility (CSR) can lead to luxury brand avoidance within an Indian context. In-depth interviews were conducted with luxury consumers in New Delhi. The demographic breakdown of those interviewed was 16 males and 9 females, aged between 21 and 44. Antecedents of brand avoidance fell into two main categories. The first was consumer dissatisfaction due to poor product or service performance. Customer service, particularly within the hospitality sector, was identified as a defining source of brand avoidance. The second was negative stereotypes of brand users. A salient finding was that no single participant explicitly identified CSR as a source of brand avoidance. However, the interviews revealed that luxury consumers are in fact concerned about CSR issues but assume that international luxury brands have a positive record on CSR performance. Interestingly, participants placed greater emphasis on the broader interpretation of ‘corporate reputation’ than on specific social or environmental issues when judging the CSR performance of a luxury brand. The findings reported in this exploratory study suggest that Indian luxury consumers do value the overall CSR performance of luxury brands, expressed as brand responsibility or brand reputation, and that this is a potential source of brand avoidance. International luxury brands therefore need to develop, but also to communicate, a positive CSR strategy in order to reduce the risk of customers forming negative opinions about the brand.

Keywords: brand avoidance, CSR, luxury

Procedia PDF Downloads 298
21245 Knowledge Management (KM) Practices: A Study of KM Adoption among Doctors in Kuwait

Authors: B. Alajmi, L. Marouf, A. S. Chaudhry

Abstract:

In recent years, increasing emphasis has been placed upon issues concerning the evaluation of health care. In this regard, knowledge management (KM) has also been considered an important component of the evaluation process. KM facilitates the transfer of existing knowledge and the development of new knowledge among healthcare staff and patients. This research aimed to examine how hospitals in Kuwait employ knowledge management practices – capturing, sharing, and generating – and the perceived impact of KM practices on the performance of hospitals in Kuwait. Adopting a quantitative survey method with a sample of 277 doctors, the study found that the adoption of the three major knowledge management practices – knowledge capturing, sharing, and generating – was rated very low in the sampled hospitals in Kuwait. Hospitals paid little attention to the main activities that support the transfer of expertise among doctors. However, as predicted by previous studies, knowledge management practices were perceived to have an impact on hospitals’ performance. Through knowledge capturing, sharing, and generating, hospitals could improve the services they provide by documenting best practices and transforming themselves into learning organizations in which lessons learned are captured, stored, and made available for others to learn from.

Keywords: knowledge management, hospitals, knowledge management practices, knowledge management tools, performance

Procedia PDF Downloads 487
21244 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It determines whether companies are successful or in a downward spiral. A thorough understanding of it is important - many companies have whole divisions dedicated to analysis of both their own stock and that of rival companies. Linking the world of finance and artificial intelligence (AI), especially the stock market, is a relatively recent development. Predicting how stocks will perform, considering all external factors and previous data, has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do; machine learning makes this task much easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural-network-based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Comparing the testing-set accuracy of four different models - linear regression, neural network, decision tree, and naïve Bayes - on the stocks of Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan Chase, and Johnson & Johnson, the naïve Bayes and linear regression models worked best. On the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set showed similar results, except that the decision tree model predicted it with complete accuracy; this suggests that the decision tree model overfitted the training set, which hurt its performance on the testing set.
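
A minimal sketch of the four-model comparison described above, using scikit-learn. The ticker file, the column names, and the framing of "accuracy" as next-day direction classification are illustrative assumptions, not the paper's exact setup.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("AAPL.csv")  # hypothetical file of daily open/high/low/close prices
X = df[["open", "high", "low", "close"]].iloc[:-1]               # today's prices
y = (df["close"].shift(-1) > df["close"]).iloc[:-1].astype(int)  # next-day up/down label

# 80:20 time-ordered split (no shuffling for a time series)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
    "decision tree": DecisionTreeClassifier(),  # typically perfect on training data
    "naive Bayes": GaussianNB(),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(f"{name}: train={m.score(X_tr, y_tr):.3f} test={m.score(X_te, y_te):.3f}")
```

A perfect training score alongside a lower test score, as the decision tree tends to show here, is exactly the overfitting pattern the abstract describes.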

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 81
21243 Development of a Standardization Methodology Assessing the Comfort Performance for Hanok

Authors: Mi-Hyang Lee, Seung-Hoon Han

Abstract:

Korean traditional residences have been built for thousands of years with deep attention to social, cultural, and environmental values, but their meaning is vanishing under today's different lifestyles. It is necessary, therefore, to grasp the meaning of the Korean traditional building called Hanok and to help Korean people understand its real advantages. The purpose of this study is to propose a standardization methodology for evaluating the comfort features of Korean traditional houses. This paper also seeks to build an official standard evaluation system and to integrate the aesthetic and psychological values induced by Hanok. Its comfort performance values can be divided into two large categories, physical and psychological, and fourteen methods have been defined as Korean Standards (KS). For this research, field survey data from representative Hanok types were collected for each method. The study also contains a qualitative in-depth analysis of the Hanok comfort index by professionals using the AHP (Analytical Hierarchy Process) and examines the effect of the methods. As a result, this paper defines which methods provide trustworthy outcomes and how to evaluate the strengths in spatial comfort of Hanok using the suggested procedures for the spatial configuration of traditional dwellings. The study finally proposes an integrated standardization methodology for assessing the comfort performance of Korean traditional residences, and it is expected that it can be used to evaluate the comfort of residents and the interior environmental conditions of wood-structured dwellings like Hanok.
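
A minimal AHP sketch: priority weights for comfort criteria are derived from a pairwise comparison matrix via the principal eigenvector, with Saaty's consistency check. The criteria names and the judgments in A are illustrative assumptions, not the survey data of this study.

```python
import numpy as np

criteria = ["thermal", "acoustic", "visual", "psychological"]
A = np.array([[1.0, 3.0, 5.0, 1.0],      # A[i][j]: how strongly criterion i
              [1/3, 1.0, 3.0, 1/3],      # dominates criterion j (Saaty 1-9 scale)
              [1/5, 1/3, 1.0, 1/5],
              [1.0, 3.0, 5.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # priority weights of the criteria

n = len(A)
ci = (eigvals[k].real - n) / (n - 1)     # consistency index
cr = ci / 0.90                           # random index RI = 0.90 for n = 4
print(dict(zip(criteria, w.round(3))), "CR =", round(cr, 3))  # CR < 0.1 is acceptable
```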

Keywords: Hanok, comfort performance, human condition, analytical hierarchy process

Procedia PDF Downloads 140
21242 Inclusion and Changes of a Research Criterion in the Institute for Quality and Accreditation of Computing, Engineering and Technology Accreditation Model

Authors: J. Daniel Sanchez Ruiz

Abstract:

The paper explains why and how a research criterion was included within an accreditation system for undergraduate engineering programs, in spite of this not being a common practice of accreditation agencies at a global level. This paper is divided into three parts. The first presents the context and the motivations that led the Institute for Quality and Accreditation of Computing, Engineering and Technology Programs (ICACIT) to add a research criterion. The second describes the criterion adopted and the feedback received during the 2017 accreditation cycle. In the third, the author proposes changes to the accreditation criteria that respond in a pertinent way to the results-based accreditation model and the national context. The author seeks to reconcile an outcome-based accreditation model, aligned with the standards established by the International Engineering Alliance, with the particular context of higher education in Peru.

Keywords: accreditation, engineering education, quality assurance, research

Procedia PDF Downloads 272
21241 Low-Cost Image Processing System for Evaluating Pavement Surface Distress

Authors: Keerti Kembhavi, M. R. Archana, V. Anjaneyappa

Abstract:

Most asphalt pavement condition evaluations use rating frameworks in which asphalt pavement distress is estimated by type, extent, and severity. Rating is carried out via the pavement condition rating (PCR), which is tedious and expensive. This paper presents the development of a low-cost technique for pavement distress image analysis that permits the identification of potholes and cracks. The paper explores the application of image processing tools for the detection of potholes and cracks. Longitudinal cracking and potholes are detected using Fuzzy C-Means (FCM) clustering followed by a spectral clustering algorithm. The framework comprises three phases: image acquisition, processing, and extraction of features. A digital camera (GoPro) with a holder is used to capture pavement distress images from a moving vehicle. The FCM classifier and spectral clustering algorithms are used to compute features and classify the longitudinal cracking and potholes. The MATLAB R2016a image processing toolkit is used to assess pavement distress on selected urban stretches of Bengaluru city, India. The outcomes of image evaluation with the semi-automated image processing framework captured the features of longitudinal cracks and potholes with an accuracy of about 80%. Further, the detected images were validated against the actual dimensions, and the dimension variability is about 0.46. The linear regression model y = 1.171x - 0.155 was obtained from the existing and experimental (image processing) areas. The R² value obtained from the best-fit line is 0.807, which in the linear regression model is considered a 'large positive linear association'.
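
A hand-rolled fuzzy C-means sketch separating dark distress pixels from the pavement background in a grey-level image; the two-cluster setup, fuzzifier m = 2, and the random stand-in frame are illustrative assumptions (the paper pairs FCM with spectral clustering on real GoPro frames).

```python
import numpy as np

def fcm(values, c=2, m=2.0, iters=100, tol=1e-5):
    """Fuzzy C-means on 1-D samples (pixel intensities)."""
    x = values.reshape(-1, 1).astype(float)
    u = np.random.dirichlet(np.ones(c), size=len(x))      # random memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]    # weighted cluster centers
        d = np.abs(x - centers.T) + 1e-12                 # sample-to-center distances
        p = 2.0 / (m - 1.0)
        u_new = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
        if np.abs(u_new - u).max() < tol:
            break
        u = u_new
    return centers.ravel(), u

img = np.random.randint(0, 256, (120, 160))               # stand-in for a GoPro frame
centers, u = fcm(img)
distress = (u.argmax(axis=1) == centers.argmin())         # darker cluster ~ crack/pothole
print("centers:", centers.round(1), "distress pixels:", int(distress.sum()))
```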

Keywords: crack detection, pothole detection, spectral clustering, fuzzy-c-means

Procedia PDF Downloads 165
21240 Kinetic Modeling of Colour and Textural Properties of Stored Rohu (Labeo rohita) Fish

Authors: Pramod K. Prabhakar, Prem P. Srivastav

Abstract:

Rohu (Labeo rohita) is an Indian major carp and a highly relished freshwater food for its unique flavor, texture, and culinary properties. It is highly perishable, and spoilage occurs as a result of a series of complicated biochemical changes brought about by enzymes, which are a function of time and also of storage temperature. The influence of storage temperature (5, 0, and -5 °C) on the colour and texture of fish was studied during a 14-day storage period in order to analyze the kinetics of colour and textural changes. The rate of total colour change was most noticeable at the highest storage temperature (5 °C), and these changes were well described by a first-order reaction. Texture is an important quality variable of fish and is of increasing concern to aquaculture industries. Textural parameters such as hardness, toughness, and stiffness were evaluated on a texture analyzer on different days of storage. A significant reduction (P ≤ 0.05) in hardness was observed after the 2nd, 4th, and 8th day for fish stored at 5, 0, and -5 °C, respectively. The textural changes of fish during storage followed a first-order kinetic model and fitted this model well (R² > 0.95). The textural data with respect to time were also fitted to a modified Maxwell model and found to be a good fit, with R² values ranging from 0.96 to 0.98. The temperature dependence of colour and texture change was adequately modelled with an Arrhenius-type equation. The fitted model may be used for the determination of the shelf life of Rohu (Labeo rohita) fish.
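
A sketch of the two-step fit described above: a first-order decay fitted to hardness at each storage temperature, then an Arrhenius fit of the resulting rate constants. The hardness values below are illustrative stand-ins, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314                                           # gas constant, J/(mol K)
t = np.array([0, 2, 4, 6, 8, 10, 12, 14], float)    # storage day
hardness = {                                        # temperature (K): hardness series
    278.15: np.array([10, 8.2, 6.9, 5.7, 4.8, 4.0, 3.4, 2.9]),   #  5 degC
    273.15: np.array([10, 9.0, 8.1, 7.3, 6.6, 6.0, 5.4, 4.9]),   #  0 degC
    268.15: np.array([10, 9.5, 9.0, 8.6, 8.2, 7.8, 7.4, 7.1]),   # -5 degC
}

first_order = lambda t, h0, k: h0 * np.exp(-k * t)  # H(t) = H0 exp(-k t)
ks = {}
for T, h in hardness.items():
    (h0, k), _ = curve_fit(first_order, t, h, p0=(10.0, 0.1))
    ks[T] = k

# Arrhenius: k = A exp(-Ea/(R T))  =>  ln k = ln A - (Ea/R)(1/T)
Ts = np.array(sorted(ks))
slope, intercept = np.polyfit(1.0 / Ts, np.log([ks[T] for T in Ts]), 1)
print(f"Ea = {-slope * R / 1000:.1f} kJ/mol, A = {np.exp(intercept):.3g} 1/day")
```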

Keywords: first order kinetics, biochemical changes, Maxwell model, colour, texture, Arrhenius type equation

Procedia PDF Downloads 218
21239 Decision Support System for Hospital Selection in Emergency Medical Services: A Discrete Event Simulation Approach

Authors: D. Tedesco, G. Feletti, P. Trucco

Abstract:

The present study aims to develop a Decision Support System (DSS) to support the operational decision of the Emergency Medical Service (EMS) regarding the assignment of medical emergency requests to Emergency Departments (ED). In the literature, this problem is also known as “hospital selection” and concerns the definition of policies for selecting the ED to which patients who require further treatment are transported by ambulance. The research methodology began with a review of the technical-scientific literature concerning DSSs to support EMS management and, in particular, the hospital selection decision. From the literature analysis, it emerged that current studies are mainly focused on the EMS phases related to the ambulance service and consider a process that ends when the ambulance becomes available after completing a request. Therefore, all the ED-related issues are excluded and considered as part of a separate process. Indeed, the most studied hospital selection policy turned out to be proximity, which minimizes the transport time and releases the ambulance in the shortest possible time. The purpose of the present study is to develop an optimization model for assigning medical emergency requests to the EDs, considering information relating to the subsequent phases of the process, such as the case-mix, the expected service throughput times, and the operational capacity of the different EDs. To this end, a Discrete Event Simulation (DES) model was created to evaluate different hospital selection policies. The next steps of the research consisted of the development of a general simulation architecture, its implementation in the AnyLogic software, and its validation on a realistic dataset. The hospital selection policy that produced the best results was the minimization of the Time To Provider (TTP), defined as the time from the beginning of the ambulance journey to the beginning of the clinical evaluation by the ED doctor. Finally, two approaches were compared: a static approach, based on a retrospective estimate of the TTP, and a dynamic approach, based on a predictive estimate of the TTP determined with a constantly updated Winters model. Findings reveal that adopting the minimization of TTP as a hospital selection policy brings several benefits. It significantly reduces service throughput times in the ED with a minimal increase in travel time. Furthermore, it gives an immediate view of the saturation state of the EDs and takes the case-mix of the ED structures (i.e., the different triage codes) into account, since different severity codes correspond to different service throughput times. Besides, the predictive approach is certainly more reliable for TTP estimation than the retrospective approach, but it is more difficult to apply. These considerations can support decision-makers in introducing different hospital selection policies to enhance EMS performance.
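
A sketch of the dynamic policy: each ED's waiting time is forecast with a Winters (triple exponential smoothing) model, and the request is assigned where the predicted TTP, travel time plus forecast ED wait, is smallest. The travel times and wait-time histories are illustrative assumptions, not the study's dataset or its AnyLogic DES.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
hours = np.arange(96)                     # four days of hourly observations
eds = {                                   # ED: (travel min, hourly ED wait history)
    "ED_A": (12, 60 + 25 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, 96)),
    "ED_B": (20, 35 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, 96)),
}

def predicted_ttp(travel, history):
    winters = ExponentialSmoothing(history, trend="add", seasonal="add",
                                   seasonal_periods=24).fit()  # daily seasonality
    return travel + winters.forecast(1)[0]   # TTP = travel + predicted ED wait

choice = min(eds, key=lambda ed: predicted_ttp(*eds[ed]))
print("assign incoming request to", choice)
```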

Keywords: discrete event simulation, emergency medical services, forecast model, hospital selection

Procedia PDF Downloads 77
21238 A Simple Model for Solar Panel Efficiency

Authors: Stefano M. Spagocci

Abstract:

The efficiency of photovoltaic panels can be calculated with software packages such as RETScreen, which allow design engineers to take financial as well as technical considerations into account. RETScreen is interfaced with meteorological databases, so that efficiency calculations can be carried out realistically. The author has recently contributed to the development of solar modules with accumulation capability and an embedded water purifier, aimed at off-grid users such as those in developing countries. The software packages examined do not allow ancillary equipment to be taken into account, hence the decision to implement a technical and financial model of the system. The author realized that, rather than re-implementing the quite sophisticated model of RETScreen - a mathematical description of which is in any case not publicly available - it was possible to drastically simplify it, including the meteorological factors which, in RETScreen, are given in numerical form. The day-by-day efficiency of a photovoltaic solar panel was parametrized as the product of factors expressing, respectively, daytime duration, solar right-ascension motion, solar declination motion, cloudiness, and temperature. For the sun-motion-dependent factors, positional astronomy formulae, simplified by the author, were employed. Meteorology-dependent factors were fitted by simple trigonometric functions, using numerical data supplied by RETScreen. The accuracy of the model was tested by comparing it to the predictions of RETScreen; agreement within 11% was obtained. In conclusion, the study resulted in a model that can easily be implemented in a spreadsheet - thus being manageable by non-specialist personnel - or in more sophisticated software packages. The model was used in a number of design exercises concerning photovoltaic solar panels and ancillary equipment like the above-mentioned water purifier.
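
A sketch of the factor-product model described above. The declination and day-length terms use standard positional-astronomy formulae; the cloudiness and temperature factors are trigonometric placeholders for the fits to RETScreen data, and the latitude is an assumption.

```python
import numpy as np

phi = np.radians(45.0)                       # site latitude (assumed)
day = np.arange(1, 366)                      # day of year

decl = np.radians(23.45) * np.sin(2 * np.pi * (284 + day) / 365)  # solar declination
ws = np.arccos(np.clip(-np.tan(phi) * np.tan(decl), -1, 1))       # sunset hour angle
daylight = 24.0 / np.pi * ws                                      # daytime duration, h

f_day = daylight / daylight.max()                        # daytime-duration factor
f_sun = np.cos(phi - decl) / np.cos(phi - decl).max()    # midday sun-height factor
f_cloud = 0.70 + 0.20 * np.cos(2 * np.pi * (day - 200) / 365)  # placeholder fit
f_temp = 0.95 + 0.03 * np.cos(2 * np.pi * (day - 20) / 365)    # placeholder fit

output = f_day * f_sun * f_cloud * f_temp    # day-by-day relative panel output
print("best day:", int(day[output.argmax()]), "worst day:", int(day[output.argmin()]))
```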

Keywords: clean energy, energy engineering, mathematical modelling, photovoltaic panels, solar energy

Procedia PDF Downloads 35
21237 Automated Prediction of HIV-associated Cervical Cancer Patients Using Data Mining Techniques for Survival Analysis

Authors: O. J. Akinsola, Yinan Zheng, Rose Anorlu, F. T. Ogunsola, Lifang Hou, Robert Leo-Murphy

Abstract:

Cervical Cancer (CC) is the second most common cancer among women living in low- and middle-income countries, with no associated symptoms during its formative stages. Despite advancing and innovative medical research, numerous preventive measures are being utilized, but the incidence of cervical cancer cannot be reduced by the application of screening tests alone. The mortality associated with invasive cervical cancer can be nipped in the bud through early-stage detection. This study applied an array of top feature selection techniques aimed at developing a model that can validly identify the risk factors of cervical cancer. A retrospective clinic-based cohort study was conducted on 178 HIV-associated cervical cancer patients at Lagos University Teaching Hospital, Nigeria (U54 data repository) in April 2022. The outcome measure was the automated prediction of HIV-associated cervical cancer cases, while the predictor variables included demographic information, reproductive history, birth control, sexual history, and cervical cancer screening history for invasive cervical cancer. The proposed technique was implemented with the R and Python programming languages, utilizing classification algorithms for the detection and diagnosis of cervical cancer disease. Four machine learning classification algorithms were used: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN). The dataset was split into training and testing sets in an 80:20 ratio; the numerical features were standardized, and hyperparameter tuning was carried out to train and test the machine learning models. Fitting features for the detection and diagnosis of cervical cancer were selected from the characteristics in the dataset, using the contributions of various selection methods, for the classification of patients into healthy or diseased status. The mean age of patients was 49.7±12.1 years, the mean age at pregnancy was 23.3±5.5 years, the mean age at first sexual experience was 19.4±3.2 years, and the mean BMI was 27.1±5.6 kg/m². A larger percentage of the patients were married (62.9%), and most of them had at least two sexual partners (72.5%). Age of patients (OR=1.065, p<0.001**), marital status (OR=0.375, p=0.011**), number of pregnancy live-births (OR=1.317, p=0.007**), and use of birth control pills (OR=0.291, p=0.015**) were found to be significantly associated with HIV-associated cervical cancer. On the top 10 features (variables) considered in the analysis, RF gave the best overall model performance, with an accuracy of 72.0%, a precision of 84.6%, a recall of 84.6%, and an F1-score of 74.0%, while LR achieved an accuracy of 74.0%, a precision of 70.0%, a recall of 70.0%, and an F1-score of 70.0%. The RF model identified 10 features predictive of developing cervical cancer. The age of patients was the most important risk factor, followed by the number of pregnancy live-births, marital status, and use of birth control pills. The study shows that data mining techniques could be used to identify women living with HIV who are at high risk of developing cervical cancer in Nigeria and other sub-Saharan African countries.
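
A sketch of the modelling pipeline described above: an 80:20 split, standardized numerical features, the four classifiers, and RF feature importances. The file and column names are illustrative; the U54 cohort data is not reproduced here.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("cervical_cohort.csv")            # hypothetical cohort extract
X, y = df.drop(columns="cancer_status"), df["cancer_status"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=42)
scaler = StandardScaler().fit(X_tr)                # standardize numerical features
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)

models = {"LR": LogisticRegression(max_iter=1000),
          "DT": DecisionTreeClassifier(),
          "RF": RandomForestClassifier(n_estimators=300, random_state=42),
          "KNN": KNeighborsClassifier()}
for name, m in models.items():
    m.fit(X_tr_s, y_tr)
    print(name, classification_report(y_te, m.predict(X_te_s)))

top10 = sorted(zip(X.columns, models["RF"].feature_importances_),
               key=lambda p: p[1], reverse=True)[:10]   # e.g. age, live births, ...
print(top10)
```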

Keywords: associated cervical cancer, data mining, random forest, logistic regression

Procedia PDF Downloads 69
21236 Improve B-Tree Index’s Performance Using Lock-Free Hash Table

Authors: Zhanfeng Ma, Zhiping Xiong, Hu Yin, Zhengwei She, Aditya P. Gurajada, Tianlun Chen, Ying Li

Abstract:

Many RDBMS vendors use a B-tree index to achieve high performance for point queries and range queries, and some of them also employ a hash index to further enhance performance, as a hash table is more efficient for point queries. However, there are extra overheads in maintaining a separate hash index: for example, the hash mapping for all data records must always be maintained, which results in more memory consumption; and locking, logging, and other mechanisms are needed to guarantee ACID, which affects the concurrency and scalability of the system. To relieve these overheads, the Hash Cached B-tree (HCB) index is proposed in this paper, which consists of a standard disk-based B-tree index and an additional in-memory lock-free hash table. Initially, only the B-tree index is constructed for all data records; the hash table is built on the fly based on the runtime workload, and only data records accessed by point queries are indexed in the hash table, which helps reduce the memory footprint. Changes to the hash table are made using compare-and-swap (CAS) without locking or logging, which improves concurrency and avoids contention. The hash table is also optimized to be cache-conscious. The HCB index is implemented in the SAP ASE database; compared with the standard B-tree index, early experiments and customer adoptions show significant performance improvement. This paper provides an overview of the design of the HCB index and reports the experimental results.
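
A conceptual sketch of the HCB read path: point queries consult an in-memory hash cache built on the fly in front of the disk-based B-tree, while range queries always go to the B-tree. Python has no CAS primitive, so a plain dict stands in for the lock-free table, and the `search`/`range_search` B-tree methods are assumed interfaces, not the SAP ASE implementation.

```python
class HashCachedBTree:
    """Toy model of the HCB index read path."""

    def __init__(self, btree):
        self.btree = btree    # disk-based B-tree built for all records
        self.cache = {}       # stand-in for the lock-free, cache-conscious hash table

    def point_query(self, key):
        row = self.cache.get(key)
        if row is not None:
            return row                       # fast path: no B-tree traversal
        row = self.btree.search(key)         # slow path: normal B-tree lookup
        if row is not None:
            # In the real design this insert is a compare-and-swap on a bucket
            # slot, so no latches or log records are needed.
            self.cache[key] = row
        return row

    def range_query(self, lo, hi):
        return self.btree.range_search(lo, hi)   # ranges bypass the hash table
```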

Keywords: B-tree, compare-and-swap, lock-free hash table, point queries, range queries, SAP ASE database

Procedia PDF Downloads 273
21235 Target and Biomarker Identification Platform to Design New Drugs against Aging and Age-Related Diseases

Authors: Peter Fedichev

Abstract:

We studied fundamental aspects of aging to develop a mathematical model of the gene regulatory network. We show that aging manifests itself as an inherent instability of the gene network, leading to an exponential accumulation of regulatory errors with age. To validate our approach, we studied age-dependent omics data, such as transcriptomes and metabolomes, of different model organisms and humans. We built a computational platform based on our model to identify targets and biomarkers of aging in order to design new drugs against aging and age-related diseases. As biomarkers of aging, we chose the rate of aging and the biological age, since they completely determine the state of the organism. Since the rate of aging changes rapidly in response to external stress, this kind of biomarker can be useful as a tool for quantitative efficacy assessment of drugs and their combinations, dose optimization, chronic toxicity estimation, selection of personalized therapies, achievement of clinical endpoints (within clinical research), and death risk assessment. Based on our model, we propose a method of target identification for further interventions against aging and age-related diseases. Being a biotech company, we offer a complete pipeline to develop an anti-aging drug candidate.

Keywords: aging, longevity, biomarkers, senescence

Procedia PDF Downloads 263
21234 Power Control of DFIG in WECS Using Backstepping and Sliding Mode Controller

Authors: Abdellah Boualouch, Ahmed Essadki, Tamou Nasser, Ali Boukhriss, Abdellatif Frigui

Abstract:

This paper presents power control for a Doubly Fed Induction Generator (DFIG) used in a Wind Energy Conversion System (WECS) connected to the grid. The proposed control strategy employs two nonlinear controllers, a backstepping controller (BSC) and a sliding-mode controller (SMC), to directly calculate the required rotor control voltage so as to eliminate the instantaneous errors of active and reactive powers. The advantages of BSC and SMC are presented, and the performance and robustness of the two controllers are compared. First, we present a model of the wind turbine and the DFIG machine, then a synthesis of the controllers and their application to DFIG power control. Simulation results on a 1.5 MW grid-connected DFIG system, obtained with MATLAB/Simulink, are provided.
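
A minimal sliding-mode sketch on a first-order stand-in plant for the rotor-side power loop, dP/dt = aP + bu. The plant and its parameters are simplified assumptions, far lighter than the paper's DFIG model in MATLAB/Simulink; the point is the structure u = u_eq + (K/b)·sign(S) that drives the power error onto the sliding surface.

```python
import numpy as np

a, b = -5.0, 40.0            # illustrative plant parameters
K, dt = 200.0, 1e-4          # switching gain, integration step
P, P_ref = 0.0, 1.0          # active power and its reference (per unit)

for _ in range(5000):
    S = P_ref - P                       # sliding surface: power tracking error
    u_eq = -a * P / b                   # equivalent control (keeps dS/dt = 0 on S = 0)
    u = u_eq + (K / b) * np.sign(S)     # switching term forces S toward zero
    P += (a * P + b * u) * dt           # Euler integration of the plant

print(f"power after transient: {P:.3f} pu")   # chatters tightly around P_ref
```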

Keywords: backstepping, DFIG, power control, sliding-mode, WECS

Procedia PDF Downloads 577
21233 Fapitow: An Advanced AI Agent for Travel Agent Competition

Authors: Faiz Ul Haque Zeya

Abstract:

In this paper, Fapitow's bidding strategy and approach to participating in the Travel Agent Competition (TAC) are described. Fapitow was initially designed around the agents provided by the TAC team, and our strategy was mainly developed by modifying them. Later, by observing the behavior of the agent, it was decided to devise strategies that would be the main driver of improved agent utilities; theoretical examination made it evident that these strategies would provide a significant improvement in performance, which was later borne out by the agent's performance in the games. Techniques and strategies for further possible improvement are also described. TAC provides a real-time, uncertain environment for learning, experimenting with, and implementing various AI techniques. Some lessons learned about handling uncertain environments are also presented.

Keywords: agent, travel agent competition, bidding, TAC

Procedia PDF Downloads 90
21232 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurately

Authors: Susan Diamond

Abstract:

Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training large neural network models is very resource intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring hardware to configuring it with the right firmware and software. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which is therefore slow to react to dynamic changes in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable, and fault-tolerant manner. It supports a wide range of deep learning frameworks such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skill set required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.

Keywords: deep learning, machine learning, cognitive computing, model training

Procedia PDF Downloads 196
21231 Person Re-Identification using Siamese Convolutional Neural Network

Authors: Sello Mokwena, Monyepao Thabang

Abstract:

In this study, we propose a comprehensive approach to address the challenges in person re-identification models. By combining a centroid tracking algorithm with a Siamese convolutional neural network model, our method excels in detecting, tracking, and capturing robust person features across non-overlapping camera views. The algorithm efficiently identifies individuals in the camera network, while the neural network extracts fine-grained global features for precise cross-image comparisons. The approach's effectiveness is further accentuated by leveraging the camera network topology for guidance. Our empirical analysis on benchmark datasets highlights its competitive performance, particularly evident when background subtraction techniques are selectively applied, underscoring its potential in advancing person re-identification techniques.
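
A minimal Siamese sketch in PyTorch: one shared CNN embeds two person crops, and a contrastive loss pulls same-identity pairs together and pushes different-identity pairs apart. The layer sizes and crop shapes are illustrative assumptions, not the trained model of the study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Embedder(nn.Module):
    """Shared branch of the Siamese network."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

def contrastive_loss(z1, z2, same, margin=1.0):
    d = F.pairwise_distance(z1, z2)        # embedding distance per pair
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

net = Embedder()                           # same weights on both branches = Siamese
a = torch.randn(4, 3, 128, 64)             # person crops from camera view 1
b = torch.randn(4, 3, 128, 64)             # candidate crops from camera view 2
same = torch.tensor([1.0, 0.0, 1.0, 0.0])  # ground-truth identity match labels
loss = contrastive_loss(net(a), net(b), same)
loss.backward()
print(float(loss))
```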

Keywords: camera network, convolutional neural network topology, person tracking, person re-identification, siamese

Procedia PDF Downloads 58
21230 Soccer Match Result Prediction System (SMRPS) Model

Authors: Ajayi Olusola Olajide, Alonge Olaide Moses

Abstract:

Predicting the outcome of soccer matches poses an interesting challenge, one that is realistically impossible to meet successfully for every match. Despite this, substantial resources are expended on the correct prediction of soccer matches weekly, all over the world. The Soccer Match Result Prediction System Model (SMRPSM) is a proposed system whereby the results of matches between two soccer teams are auto-generated, with the added excitement of giving users a chance to test their predictive abilities. Soccer teams from different football leagues are loaded by the application, with each team's corresponding manager and other information such as team location, team logo, and nickname. The user can also interact with the system by selecting the match to be predicted and viewing the results of completed matches after registering/logging in.

Keywords: predicting, soccer match, outcome, soccer, matches, result prediction, system, model

Procedia PDF Downloads 477
21229 Performance Improvement of UWB Corrugated Antipodal Vivaldi Antenna Using Spiral Shape Negative Index Metamaterial

Authors: Rahul Singha, D. Vakula

Abstract:

This paper presents a corrugated antipodal Vivaldi antenna with improved performance obtained by using a negative-index metamaterial (NIM) with an Archimedean spiral design. A single-layer NIM piece is placed perpendicularly in the middle of the two arms of the proposed antenna. The antenna measures 30×60×0.787 mm³ and operates at 8 GHz. The simulated results of the NIM corrugated antipodal Vivaldi antenna show that the gain and directivity are increased by up to 1.2 dB and 1 dB, respectively. The HPBW is increased by 90, with a reflection coefficient of less than -10 dB from 4.7 GHz to 11 GHz for UWB applications.

Keywords: Negative Index Metamaterial (NIM), Ultra Wide Band (UWB), Half Power Beam Width (HPBW), vivaldi antenna

Procedia PDF Downloads 605
21228 Quality Management in Construction Project

Authors: Harsh Panchal, Saurabh Amrutkar

Abstract:

Quality management is an essential part of any project and is directly related to the project's performance. Quality management depends on multiple factors at different stages of a project, from time management to construction logistics. A project is a mixture of various components, including itinerary management, health and safety, crew productivity, and many more. From the survey conducted, we concluded that advancement in technology and an indigenous approach to a project will result in maximum quality standards and better project performance. In this paper, we discuss the various components of the factors above that compromise the quality of a project and how they can be controlled in order to achieve maximum quality assurance, using quality planning and total quality management. The paper also focuses on the limitations and problems faced in each factor responsible for quality management and on tackling them using techniques and processes based on activities, identifying their sequence, the critical path, and durations. The project management concept that deals with the sequence of scope, cost, and time gives us an overview of ongoing quality management, in a nutshell providing hints to regulate the current procedure for the maximum achievable quality. It also deals with the problems faced by engineers that make mundane work processes slow and reduce the quality of the outcome drastically.

Keywords: management, performance, project, quality

Procedia PDF Downloads 150
21227 An Inverse Heat Transfer Algorithm for Predicting the Thermal Properties of Tumors during Cryosurgery

Authors: Mohamed Hafid, Marcel Lacroix

Abstract:

This study aimed at developing an inverse heat transfer approach for predicting the time-varying freezing front and the temperature distribution of tumors during cryosurgery. Using a temperature probe pressed against the tumor layer, the inverse approach is able to predict simultaneously the metabolic heat generation and the blood perfusion rate of the tumor. Once these parameters are predicted, the temperature field and the time-varying freezing fronts are determined with the direct model. The direct model rests on the one-dimensional Pennes bioheat equation. The phase change problem is handled with the enthalpy method. The Levenberg-Marquardt Method (LMM) combined with the Broyden Method (BM) is used to solve the inverse model. The effects (a) of the thermal properties of the diseased tissues; (b) of the initial guesses for the unknown thermal properties; (c) of the data capture frequency; and (d) of the noise on the recorded temperatures are examined. It is shown that the proposed inverse approach remains accurate for all the cases investigated.
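
A sketch of the inverse procedure: a 1-D explicit finite-difference Pennes solver serves as the direct model, and scipy's Levenberg-Marquardt solver (standing in for the paper's LMM+Broyden combination, and without the phase change) recovers the metabolic heat q_m and blood perfusion w_b from probe temperatures. All tissue properties and the synthetic "measurements" are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

k, rho_c = 0.5, 3.8e6          # conductivity W/(m K), volumetric heat J/(m^3 K)
rho_b_c_b, Ta = 4.0e6, 37.0    # blood heat capacity, arterial temperature
nx, dx, dt, steps = 21, 1e-3, 0.05, 600

def direct_model(params):
    """Pennes: rho*c dT/dt = k d2T/dx2 + w_b rho_b c_b (Ta - T) + q_m."""
    w_b, q_m = params
    T = np.full(nx, 37.0)
    T[0] = 20.0                              # cooled boundary at the probe side
    probe = np.empty(steps)
    for n in range(steps):
        lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt / rho_c * (k * lap + w_b * rho_b_c_b * (Ta - T[1:-1]) + q_m)
        probe[n] = T[1]                      # temperature history at the probe node
    return probe

true = (0.002, 4.0e4)                        # "unknown" perfusion (1/s), q_m (W/m^3)
rng = np.random.default_rng(1)
measured = direct_model(true) + rng.normal(0, 0.02, steps)

fit = least_squares(lambda p: direct_model(p) - measured,
                    x0=(0.001, 1.0e4), method="lm")   # Levenberg-Marquardt
print("recovered w_b, q_m:", fit.x)
```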

Keywords: cryosurgery, inverse heat transfer, Levenberg-Marquardt method, thermal properties, Pennes model, enthalpy method

Procedia PDF Downloads 187
21226 Effects of Cinnamon, Garlic, and Yucca Extracts on Growth Performance and Serum Biochemical Parameters in Broilers

Authors: Anguo Chen, Huajie Chen, Caimei Yang, Qihua Hong, Jun Feng

Abstract:

The experiment was conducted with 360 one-day-old Avian commercial broilers to study the effects of dietary cinnamon extract (CE), garlic extract (GE), and yucca extract (YE) on growth performance and serum biochemical parameters in broilers. The chickens were randomly divided equally into 4 treatment groups, each with 3 replications, and received the same basal corn-bean diets, a starter from 1 d to 21 d and then a grower until 42 d, supplemented with the recommended doses of 250 mg/kg CE, 25 mg/kg GE, and 10 mg/kg YE in the respective groups. Each replication of birds was kept in a stainless steel net coop with 24 h light and was fed and watered ad libitum. At 21 d and 42 d of age, 6 chicks were picked from every group and bled to collect serum samples and intestinal samples for laboratory analysis. The results showed that the average daily gain (ADG) of the CE, GE, and YE groups was increased by 7.20% (P<0.05), 3.43% (P>0.05), and 4.89% (P>0.05), and the feed:gain ratio (F/G) was improved by 9.71% (P<0.05), 3.40% (P>0.05), and 3.40% (P>0.05) compared with the control, respectively. At 21 d of age, the content of serum urea nitrogen (SUN) and serum uric acid (SUA) and the activity of serum xanthine oxidase (SXO) in the CE group were reduced by 35.17% (P<0.01), 13.73% (P<0.01), and 16.33% (P<0.05) compared with the control, respectively. At 42 d of age, the SUN and SUA levels and SXO activity were lowered by 24.35% (P<0.01), 15.49% (P<0.05), and 23.09% (P<0.01), respectively. The SXO activity in the CE group was decreased by 14.86% (P<0.01) and 15.34% (P<0.01) compared with the GE and YE groups, respectively. Also, adding CE, GE, and YE to broiler diets resulted in lower UN and UA levels in the intestinal contents. It is clear that CE decreased the SXO activity and SUA levels more significantly than GE and YE, especially in the latter period, so it may play a more important role in improving the growth performance of broilers.

Keywords: cinnamon extract, broiler, growth performance, serum uric acid, serum xanthine oxidase

Procedia PDF Downloads 422
21225 Modeling of Long Wave Generation and Propagation via Seabed Deformation

Authors: Chih-Hua Chang

Abstract:

This study uses a three-dimensional (3D) fully nonlinear model to simulate the wave generation problem caused by movement of the seabed. The numerical model is first simplified to two dimensions and then compared with existing two-dimensional (2D) experimental data and the 2D numerical results of other shallow-water wave models. Results show that this model differs from the earlier shallow-water wave models, with the phase of wave propagation being closer to the experimental results. The results of this study are also compared with the 3D experimental results of other researchers, and satisfactory agreement is obtained in both the waveform and the flow field. The study then applies the model to simulate the waves caused by circular terrain (radius r0) rising or falling (moving distance bm), and the influence of the wave-making parameters r0 and bm is discussed. The study finds that small-scale rising or sinking terrain (e.g., r0 = 2, normalized by the static water depth) will produce significant wave groups in the far field, while for large-scale moving terrain (e.g., r0 = 10), uplift and deformation will potentially generate leading solitary-like waves in the far field.

Keywords: seismic wave, wave generation, far-field waves, seabed deformation

Procedia PDF Downloads 70
21224 Estimation of the State of Charge of the Battery Using EKF and Sliding Mode Observer in MATLAB-Arduino/LabVIEW

Authors: Mouna Abarkan, Abdelillah Byou, Nacer M'Sirdi, El Hossain Abarkan

Abstract:

This paper presents the estimation of the state of charge (SOC) of the battery using two types of observers. The battery model used combines a voltage source, representing the open-circuit battery voltage, a resistance corresponding to the connections and electrolyte, and a series of parallel RC circuits representing charge-transfer and diffusion phenomena. An adaptive observer applied to this model is proposed; this observer for estimating the battery state of charge is based on the EKF and sliding mode, which are known for their robustness and simplicity of implementation. The results are validated by simulation under MATLAB/Simulink and implemented in Arduino-LabVIEW.
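
A minimal EKF sketch for SOC on a one-RC Thevenin model (open-circuit voltage source, series resistance R0, one parallel RC pair); the paper's model uses a series of RC pairs and adds a sliding-mode observer for comparison, and the cell parameters and linear OCV curve here are illustrative assumptions.

```python
import numpy as np

Q = 2.0 * 3600.0                      # capacity: 2 Ah in coulombs
R0, R1, C1, dt = 0.05, 0.015, 2400.0, 1.0
a1 = np.exp(-dt / (R1 * C1))          # RC relaxation factor per step
ocv = lambda soc: 3.2 + 0.9 * soc     # linear OCV(SOC) approximation
docv = 0.9                            # its slope, used in the EKF Jacobian

def ekf_step(x, P, i, v_meas, Qn=np.diag([1e-7, 1e-5]), Rn=1e-3):
    # predict: coulomb counting for SOC, exponential relaxation for V1
    F = np.array([[1.0, 0.0], [0.0, a1]])
    x = np.array([x[0] - i * dt / Q, a1 * x[1] + R1 * (1 - a1) * i])
    P = F @ P @ F.T + Qn
    # update with terminal voltage v = OCV(SOC) - V1 - R0*i
    H = np.array([docv, -1.0])
    y = v_meas - (ocv(x[0]) - x[1] - R0 * i)
    S = H @ P @ H + Rn
    K = P @ H / S
    return x + K * y, P - np.outer(K, H @ P)

rng = np.random.default_rng(2)
x, P = np.array([0.5, 0.0]), np.diag([0.1, 0.01])   # deliberately wrong initial SOC
soc, v1, i = 0.8, 0.0, 1.0                          # "true" cell, 1 A discharge
for _ in range(1200):
    soc -= i * dt / Q
    v1 = a1 * v1 + R1 * (1 - a1) * i
    v_meas = ocv(soc) - v1 - R0 * i + rng.normal(0, 0.005)
    x, P = ekf_step(x, P, i, v_meas)
print(f"true SOC {soc:.3f}, EKF estimate {x[0]:.3f}")
```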

Keywords: model of the battery, adaptive sliding mode observer, the EKF observer, estimation of state of charge, SOC, implementation in Arduino/LabVIEW

Procedia PDF Downloads 289
21223 Flood Risk Assessment for Agricultural Production in a Tropical River Delta Considering Climate Change

Authors: Chandranath Chatterjee, Amina Khatun, Bhabagrahi Sahoo

Abstract:

With the changing climate, precipitation events are intensifying in tropical river basins. Since these basins are significantly influenced by the monsoonal rainfall pattern, critical impacts are observed on agricultural practices in the downstream river reaches. This study analyses the crop damage and associated flood risk, in terms of net benefit, in the paddy-dominated tropical Indian delta of the Mahanadi River. The Mahanadi River basin lies in the eastern part of the Indian subcontinent and is greatly affected by the southwest monsoon rainfall extending from June to September. The river delta is highly flood-prone and has suffered recurring high floods, especially after the 2000s. In this study, the lumped conceptual model Nedbør Afstrømnings Model (NAM), from the suite of MIKE models, is used for rainfall-runoff modeling. The NAM model is laterally integrated with the MIKE11 Hydrodynamic (HD) model to route the runoff up to the head of the delta region. To obtain the precipitation-derived future projected discharges at the head of the delta, nine Global Climate Models (GCMs), namely BCC-CSM1.1(m), GFDL-CM3, GFDL-ESM2G, HadGEM2-AO, IPSL-CM5A-LR, IPSL-CM5A-MR, MIROC5, MIROC-ESM-CHEM, and NorESM1-M, available in the Coupled Model Intercomparison Project Phase 5 (CMIP5) archive, are considered. These nine GCMs were previously found to best capture the Indian summer monsoon rainfall. Based on their performance in reproducing the historical discharge pattern, three GCMs (HadGEM2-AO, IPSL-CM5A-MR, and MIROC-ESM-CHEM) are selected, with a higher Taylor Skill Score as the selection criterion. Thereafter, the 10-year return period design flood is estimated using L-moments-based flood frequency analysis for the historical period and three future projected periods (2010-2039, 2040-2069, and 2070-2099) under Representative Concentration Pathways (RCP) 4.5 and 8.5. A non-dimensional hydrograph analysis is performed to obtain the hydrographs for the historical/projected 10-year return period design floods. These hydrographs are forced into the calibrated and validated coupled 1D-2D hydrodynamic model MIKE FLOOD to simulate flood inundation in the delta region. Historical and projected flood risk is defined based on the flood inundation simulated by the MIKE FLOOD model and the inundation depth-damage-duration relationship of a normal rice variety cultivated in the delta. In general, flood risk is expected to increase in all the future projected periods compared with the historical episode. Further, in comparison to the 2010s (2010-2039), an increased flood risk in the 2040s (2040-2069) is shown by all three selected GCMs; the flood risk then declines in the 2070s, towards the end of the century (2070-2099). The methodology adopted here for flood risk assessment is one of a kind and may be implemented in any river basin worldwide. The results obtained from this study can support future flood preparedness through the implementation of suitable flood adaptation strategies.
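
A sketch of the flood-frequency step: fit a GEV distribution to annual maximum discharges and read off the 10-year return level. scipy fits by maximum likelihood rather than the paper's L-moments, and the series below is synthetic, so this only illustrates the mechanics.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
annual_max = genextreme.rvs(c=-0.1, loc=20000, scale=5000,   # synthetic annual maxima
                            size=40, random_state=rng)       # in m^3/s

c, loc, scale = genextreme.fit(annual_max)                   # MLE, not L-moments
q10 = genextreme.ppf(1 - 1 / 10, c, loc=loc, scale=scale)    # 10-year design flood
print(f"10-year return period discharge ~ {q10:.0f} m^3/s")
```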

Keywords: flood frequency analysis, flood risk, global climate models (GCMs), paddy cultivation

Procedia PDF Downloads 55
21222 Analysis of Some Solutions to Protect the Tombolo of Giens

Authors: Yves Lacroix, Van Van Than, Didier Léandri, Pierre Liardet

Abstract:

The western Tombolo of the Giens peninsula in southern France, known as Almanarre beach, is subject to coastal erosion. We are using computer simulation in order to propose solutions to stop this erosion. Our aim was first to determine the main factors behind this erosion and to successfully apply a coupled hydro-sedimentological numerical model based on observations and measurements that have been performed on the site for decades. We gathered all available information and data about waves, winds, currents, tides, bathymetry, the coastline, and sediments concerning the site. These were divided into two sets: one devoted to calibrating a numerical model using the Mike 21 software, the other serving as a reference for numerically comparing the present situation to what it could be if different types of underwater constructions were implemented. This paper presents the first part of the study: selecting and merging the different sources into a coherent database, identifying the main erosion factors, and calibrating the coupled software model against the selected reference period. Our results show a calibration of the numerical model with good fitting coefficients. They also show that winter south-western storm events combined with low-pressure weather conditions constitute a major factor of erosion, mainly due to wave impact in the northern part of Almanarre beach. The combined impact of currents and wind is shown to be negligible.

Keywords: Almanarre beach, coastal erosion, hydro-sedimentological, numerical model

Procedia PDF Downloads 303
21221 Optimal Control of DC Motor Using Linear Quadratic Regulator

Authors: Meetty Tomy, Arxhana G Thosar

Abstract:

This paper provides the implementation of optimal control for an armature-controlled DC motor. The selection of the error-weighting matrix and the control-weighting matrix needed to implement optimal control theory for improving the dynamic behavior of the DC motor is presented. The closed-loop performance of the armature-controlled DC motor with the derived linear optimal controller is then evaluated for the transient operating condition (starting). The result obtained from MATLAB is compared with that of a PID controller and with the simple closed-loop response of the motor.
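
A sketch of the LQR design described above: the armature-controlled DC motor in state-space form (states: speed and armature current), with Q as the error-weighting matrix and R as the control-weighting matrix. The motor constants are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

J, b, Kt, Ke, Ra, La = 0.01, 0.1, 0.01, 0.01, 1.0, 0.5   # assumed motor constants
A = np.array([[-b / J, Kt / J],       # d(speed)/dt
              [-Ke / La, -Ra / La]])  # d(current)/dt
B = np.array([[0.0], [1.0 / La]])     # armature voltage input

Q = np.diag([100.0, 1.0])             # error-weighting matrix: penalize speed error
R = np.array([[1.0]])                 # control-weighting matrix: penalize voltage effort

P = solve_continuous_are(A, B, Q, R)  # solve the algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P        # optimal state feedback u = -K x
print("LQR gain:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```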

Keywords: optimal control, DC motor, performance index, MATLAB

Procedia PDF Downloads 393
21220 A Modified Estimating Equation in Derivation of the Causal Effect on the Survival Time with Time-Varying Covariates

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

A systematic observation from a defined time of origin up to a certain failure or censoring event is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. At the heart of most scientific and medical research inquiries lies a causality analysis. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality often differs from the simple association between the response variable and predictors. Causal estimation is a scientific concept for comparing a pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model intuitively yields consistent estimators for the unknown parameters and unspecified monotone transformation functions. In this article, the proposed method estimates an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of the semiparametric transformation models have the advantage of including time-varying effects in the model. The finite-sample performance characteristics of the estimators are demonstrated through simulation and the Stanford heart transplant real data. To this end, the average effect of a treatment on survival time is estimated after adjusting for biases raised due to the high correlation of the left-truncation and the possibly time-varying covariates. The bias in covariates is corrected by estimating the density function of the left-truncation. Besides, to relax the independence assumption between failure time and truncation time, the model incorporates the left-truncation variable as a covariate. Moreover, the expectation-maximization (EM) algorithm iteratively obtains the unknown parameters and unspecified monotone transformation functions. To summarize the idea, the ratio of the cumulative hazard functions between the treated and untreated experimental groups gives a sense of the average causal effect for the entire population.

Keywords: a modified estimation equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate

Procedia PDF Downloads 158
21219 An Optimal Control Model for the Dynamics of Visceral Leishmaniasis

Authors: Ibrahim M. Elmojtaba, Rayan M. Altayeb

Abstract:

Visceral leishmaniasis (VL) is a vector-borne disease caused by a protozoan parasite of the genus Leishmania. Transmission of the parasite to humans and animals occurs via the bite of adult female sandflies previously infected by biting and sucking the blood of infectious humans or animals. In this paper, we use a previously proposed model and apply two optimal controls, namely treatment and vaccination, to that model to investigate optimal strategies for controlling the spread of the disease, using treatment and vaccination as the system control variables. The possible impact of using combinations of the two controls, either one at a time or both at a time, on the spread of the disease is also examined. Our results provide a framework for vaccination and treatment strategies to reduce the numbers of susceptible and infected individuals with VL over five years.

Keywords: visceral leishmaniasis, treatment, vaccination, optimal control, numerical simulation

Procedia PDF Downloads 392
21218 Developing and Enacting a Model for Institutional Implementation of the Humanizing Pedagogy: Case Study of Nelson Mandela University

Authors: Mukhtar Raban

Abstract:

As part of Nelson Mandela University's journey of repositioning its learning and teaching agenda, the university adopted and foregrounded a humanizing pedagogy, aligning with institutional goals of critically transforming the academic project. The university established the Humanizing Pedagogy Praxis and Research Niche (HPPRN) as a centralized hub for coordinating institutional work exploring and advancing humanizing pedagogies and tasked the unit with developing and enacting a model for humanizing pedagogy exploration. This investigation reports on the development and enactment of a model that sought to institutionalize a humanizing pedagogy at a South African university. Having followed a qualitative approach, the investigation presents the case study of Nelson Mandela University's HPPRN and the model it subsequently established and enacted to advance a more common institutional understanding, interpretation, and application of the humanizing pedagogy. The study adopted an interpretive lens for analysis, complementing the qualitative approach of the investigation. The primary challenge that confronted the HPPRN was the development of a 'living model' that had to complement existing institutional initiatives while accommodating a renewed spirit of critical reflection, innovation, and research of continued and new humanizing pedagogical explorations and applications. The study found that the explicit consideration of tenets of humanizing and critical pedagogies in underpinning and framing the HPPRN Model contributed to the sense of 'lived' humanizing pedagogy experiences during enactment. The multi-level inclusion of critical reflection in the development and enactment stages was found to further the processes of praxis employed at the university, which is integral to the advancement of humanizing and critical pedagogies. The development and implementation of a model that seeks to institutionalize the humanizing pedagogy at a university rely not only on sound theoretical conceptualization but also on the 'richness of becoming more human' explicitly expressed and encountered in praxis and application.

Keywords: humanizing pedagogy, critical pedagogy, institutional implementation, praxis

Procedia PDF Downloads 156
21217 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting the 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact of the presence/value of clinical conditions on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman's rho = 0.74), which helped validate our explanation.
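
The abstract does not spell out how an impact score is computed; one plausible reading, sketched here, is a perturbation score: toggle a clinical condition in the input and record the change in predicted risk. The stand-in model and binary feature layout are illustrative assumptions, not the study's DNN or its temporal-image input.

```python
import torch

def impact_scores(model, X, features):
    """Mean absolute change in predicted risk when a binary feature is toggled."""
    scores = {}
    with torch.no_grad():
        base = torch.sigmoid(model(X)).squeeze()
        for j in features:
            Xp = X.clone()
            Xp[:, j] = 1 - Xp[:, j]                     # flip presence of condition j
            flipped = torch.sigmoid(model(Xp)).squeeze()
            scores[j] = float((flipped - base).abs().mean())
    return scores      # comparable in spirit to per-feature LR log odds ratios

model = torch.nn.Linear(50, 1)                          # stand-in for the trained DNN
X = torch.randint(0, 2, (256, 50)).float()              # binary clinical indicators
print(impact_scores(model, X, features=range(5)))
```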

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 139