Search results for: cluster model approach
22658 Social Identification among Employees: A System Dynamic Approach
Authors: Muhammad Abdullah, Salman Iqbal, Mamoona Rasheed
Abstract:
Social identity among people is an important source of pride and self-esteem; consequently, people strive to preserve a positive perception of their groups and collectives. The purpose of this paper is to explain the process of social identification and to highlight the underlying causal factors of social identity among employees. There is little research on how the social identity of employees is shaped in Pakistan's organizational culture. This study is based on social identity theory and uses a systems approach as its research methodology. The feedback loop approach is applied to explain the underlying key elements of employee behavior that collectively form social identity among social groups in the corporate arena. The findings of this study reveal that the affective, evaluative, and cognitive components of an individual's personality are associated with social identification. The system dynamics feedback loop approach has revealed the underlying structure associated with social identity and social group formation, with the affective component proving to be the most strongly associated factor. This may also enable us to understand how social groups become stable and how individuals act according to group requirements. The value of this paper lies in the understanding gained about the underlying key factors that play a crucial role in social group formation in organizations. It may help to explain the rationale behind how employees socially categorize themselves within organizations, and it may also help in designing effective and more cohesive teams for better operations and long-term results, as well as in sharing knowledge among employees. The underlying structure behind social identification is highlighted with the help of system modeling.
Keywords: affective commitment, cognitive commitment, evaluative commitment, system thinking
Procedia PDF Downloads 137
22657 Degradation of Irradiated UO2 Fuel Thermal Conductivity Calculated by FRAPCON Model Due to Porosity Evolution at High Burn-Up
Authors: B. Roostaii, H. Kazeminejad, S. Khakshournia
Abstract:
The evolution of volume porosity, previously obtained using the existing low-temperature, high burn-up gaseous swelling model with progressive recrystallization for UO2 fuel, is utilized to study the degradation of irradiated UO2 thermal conductivity as calculated by the FRAPCON thermal conductivity model. A porosity correction factor is developed based on the assumption that the fuel morphology is a three-phase type, consisting of the as-fabricated pores, pores due to intergranular bubbles within the UO2 matrix, and solid fission products. The predicted thermal conductivity demonstrates an additional degradation of 27% due to porosity formation at burn-up levels around 120 MWd/kgU, which would cause a corresponding increase in the fuel temperature. Results of the calculations are compared with available data.
Keywords: irradiation-induced recrystallization, matrix swelling, porosity evolution, UO₂ thermal conductivity
Procedia PDF Downloads 298
22656 The Prevalence of Coronary Artery Disease and Its Risk Factors in Rural and Urban Areas of Pakistan
Authors: Muhammad Kamran Hanif Khan, Fahad Mushtaq
Abstract:
Background: In both developed and underdeveloped countries, coronary artery disease (CAD) is a serious cause of death and disability. Cardiovascular disease (CVD) is becoming more prevalent in emerging countries like Pakistan due to the spread and adoption of Western lifestyles. Material and Methods: An observational cross-sectional investigation was conducted, and data collection relied on a random cluster sampling method. The sample size for this cross-sectional study was calculated using the following factors: an estimated true proportion of 17.5%, a desired precision of 2%, and a confidence level of 95%. The data for this study were collected from a sample of 1,387 adults. Results: The average age of those living in rural areas is 55.24 years, compared to 52.60 years for those living in urban areas. The mean fasting blood glucose of the urban participants is 105.28 mg/dL, higher than that of the rural participants (102.06 mg/dL). The mean total cholesterol of the urban participants is 192.20 mg/dL, slightly higher than that of the rural participants (191.97 mg/dL). CAD prevalence is greater in urban areas than in rural areas. The prevalence of ECG abnormalities is 16.1% in females compared to 12.5% in males. Conclusion: The prevalence of CAD is more common in urban areas than in rural ones for all of the measures of CAD used in the study.
Keywords: CVD prevalence, CVD risk factors, rural area, urban area
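The reported sample of 1,387 is consistent with the stated design parameters under the standard single-proportion (Cochran) formula; a minimal sketch, assuming that is the formula used:

```python
import math

def sample_size_proportion(p, d, z=1.96):
    """Minimum sample size for estimating a proportion p with absolute
    precision d at the confidence level implied by z (1.96 for 95%):
    n = z^2 * p * (1 - p) / d^2, rounded up."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

# Parameters quoted in the abstract: p = 17.5%, d = 2%, 95% confidence.
n = sample_size_proportion(p=0.175, d=0.02)  # -> 1387
```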
Procedia PDF Downloads 79
22655 Cognitive Function and Coping Behavior in the Elderly: A Population-Based Cross-Sectional Study
Authors: Ryo Shikimoto, Hidehito Niimura, Hisashi Kida, Kota Suzuki, Yukiko Miyasaka, Masaru Mimura
Abstract:
Introduction: In Japan, the most aged country in the world, it is important to explore predictive factors of cognitive function among the elderly. Coping behavior relieves chronic stress and improves lifestyle, and consequently may reduce the risk of cognitive impairment. One of the most widely investigated frameworks in previous studies distinguishes approach-oriented from avoidance-oriented coping strategies. The purpose of this study is to investigate the relationship between cognitive function and coping strategies among elderly residents in urban areas of Japan. Method: This is part of the cross-sectional Arakawa geriatric cohort study of 1,099 residents (aged 65 to 86 years; mean [SD] = 72.9 [5.2]). Participants were assessed for cognitive function using the Mini-Mental State Examination (MMSE) and diagnosed by psychiatrists in face-to-face interviews. They were then assessed for their coping behaviors and coping strategies (approach- and avoidance-oriented coping) using a stress and coping inventory. A multiple regression analysis was used to investigate the relationship between MMSE score and each coping strategy. Results: Of the 1,099 participants, the mean MMSE score was 27.2 (SD = 2.7), and the numbers diagnosed as cognitively normal, with mild cognitive impairment (MCI), and with dementia were 815 (74.2%), 248 (22.6%), and 14 (1.3%), respectively. Approach-oriented coping score was significantly associated with MMSE score (B [partial regression coefficient] = 0.12, 95% confidence interval = 0.05 to 0.19) after adjusting for confounding factors including age, sex, and education. Avoidance-oriented coping did not show a significant association with MMSE score (B = -0.02, 95% confidence interval = -0.09 to 0.06). Conclusion: Approach-oriented coping was clearly associated with neurocognitive function in this Japanese population. A future longitudinal trial is warranted to investigate the protective effects of coping behavior on cognitive function.
Keywords: approach-oriented coping, cognitive impairment, coping behavior, dementia
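The adjusted-regression analysis described above can be sketched with ordinary least squares on synthetic data; the variable names, effect sizes, and distributions here are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: an MMSE-like outcome driven by an
# approach-coping score plus age and education as confounders.
n = 1000
coping = rng.normal(0, 1, n)   # approach-oriented coping score
age = rng.normal(73, 5, n)
educ = rng.normal(12, 3, n)
mmse = (27.0 + 0.12 * coping - 0.05 * (age - 73)
        + 0.10 * (educ - 12) + rng.normal(0, 1, n))

# Multiple regression via least squares; column of ones = intercept.
X = np.column_stack([np.ones(n), coping, age, educ])
beta, *_ = np.linalg.lstsq(X, mmse, rcond=None)
b_coping = beta[1]  # partial regression coefficient for coping
```

With enough observations the fitted coefficient recovers the simulated effect, mirroring the B = 0.12 reported in the abstract.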
Procedia PDF Downloads 129
22654 Optimization of the Transfer Molding Process by Implementation of Online Monitoring Techniques for Electronic Packages
Authors: Burcu Kaya, Jan-Martin Kaiser, Karl-Friedrich Becker, Tanja Braun, Klaus-Dieter Lang
Abstract:
The quality of molded packages is strongly influenced by the process parameters of transfer molding. To achieve better package quality and a stable transfer molding process, it is necessary to understand the influence of the process parameters on package quality. This work aims to characterize the relationship between the process parameters and package quality, and to identify the optimum process parameters for the transfer molding process in order to achieve fewer voids and less wire sweep. To this end, a design of experiments (DoE) is executed for process optimization and a regression analysis is carried out. A systematic approach is presented to generate models that enable an estimation of the number of voids and the amount of wire sweep. Validation experiments are conducted to verify the models, and the results are presented.
Keywords: dielectric analysis, electronic packages, epoxy molding compounds, transfer molding process
Procedia PDF Downloads 382
22653 Experimental Study on Performance of a Planar Membrane Humidifier for a Proton Exchange Membrane Fuel Cell Stack
Authors: Chen-Yu Chen, Wei-Mon Yan, Chi-Nan Lai, Jian-Hao Su
Abstract:
The proton exchange membrane fuel cell (PEMFC) has recently become more important as an alternative energy source. Maintaining proper water content in the membrane is one of the key requirements for optimizing PEMFC performance. The planar membrane humidifier has the advantages of simple structure, low cost, low pressure drop, light weight, reliable performance, and good gas separability; thus, it is a common external humidifier for PEMFCs. In this work, a planar membrane humidifier for kW-scale PEMFCs is successfully developed. The heat and mass transfer of the humidifier is discussed, and its performance is analyzed in terms of dew point approach temperature (DPAT), water vapor transfer rate (WVTR), and water recovery ratio (WRR). The DPAT of the humidifier with the counter-flow configuration reaches about 6°C under inlet dry air at 50°C and 60% RH and inlet humid air at 70°C and 100% RH. The rate of pressure loss of the humidifier is 5.0×10² Pa/min at a torque of 7 N-m, which meets the standard of commercial planar membrane humidifiers. The tests show that increasing the air flow rate increases the WVTR. However, the DPAT and the WRR are not improved by increasing the WVTR once the air flow rate exceeds the optimal value. In addition, increasing the inlet temperature or the humidity of the dry air decreases the WVTR and the WRR. Nevertheless, the DPAT is improved at elevated inlet temperatures or humidities of the dry air. Furthermore, the performance of the humidifier with the counter-flow configuration is better than that with the parallel-flow configuration; the DPAT difference between the two flow configurations reaches up to 8°C.
Keywords: heat and mass transfer, humidifier performance, PEM fuel cell, planar membrane humidifier
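The performance metrics above can be sketched numerically. The dew points come from the Magnus approximation, and the DPAT/WRR definitions used here are common ones assumed for illustration; the paper's exact definitions may differ:

```python
import math

def dew_point(t_c, rh):
    """Dew point (deg C) from temperature (deg C) and relative humidity (%)
    via the Magnus approximation."""
    a, b = 17.62, 243.12
    gamma = math.log(rh / 100.0) + a * t_c / (b + t_c)
    return b * gamma / (a - gamma)

def dpat(t_wet_in, rh_wet_in, t_dry_out, rh_dry_out):
    """Dew point approach temperature: dew point of the wet inlet minus
    dew point of the humidified dry-side outlet (one common definition)."""
    return dew_point(t_wet_in, rh_wet_in) - dew_point(t_dry_out, rh_dry_out)

def water_recovery_ratio(wvtr, water_supplied):
    """WRR: fraction of the supplied water vapor transferred across the
    membrane (both in the same mass-flow units)."""
    return wvtr / water_supplied

# Illustrative case: 70 C saturated wet inlet, dry outlet saturated at 64 C
# would give a DPAT of about 6 C, matching the order reported above.
d = dpat(70.0, 100.0, 64.0, 100.0)
```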
Procedia PDF Downloads 307
22652 Artificial Intelligence Aided Improvement in Canada's Supply Chain Management
Authors: Mohammad Talebi
Abstract:
Supply chain management is a concern for every country in the world, yet there is no single approach to sustainability. For roughly a decade, artificial intelligence applications in smart supply chains have played a key role. In this paper, applications of artificial intelligence in supply chain management are clarified, and some recommendations are offered for Canadian plans for smart supply chain management (SCM). A hierarchical framework for smart SCM can provide a useful roadmap for decision-makers to find the most appropriate approach toward smart SCM. This decision-making framework covers all the levels involved in the achievement of smart SCM. However, more consideration needs to be paid to both available and required infrastructure.
Keywords: smart SCM, AI, SSCM, procurement
Procedia PDF Downloads 88
22651 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs
Authors: Anika Chebrolu
Abstract:
Mono-targeted drugs can be of limited efficacy against complex diseases. Recently, multi-target drug design has been approached as a promising tool to fight these challenging diseases. However, the scope of current computational approaches for multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multi-targeted drug candidates against protein targets. DeepLig's model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated on its ability to generate multi-target ligands against two distinct proteins, against three distinct proteins, and against two distinct binding pockets on the same protein. In each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to designing multi-targeted drug therapies that can potentially show higher success rates during in-vitro trials.
Keywords: drug design, multitargeticity, de-novo, reinforcement learning
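The generator-predictor feedback loop can be illustrated with a deliberately tiny toy: here the "generative network" is just a weighted letter sampler, the "predictive network" is a dummy multi-target scorer, and the vocabulary and reward are pure assumptions, standing in for DeepLig's actual Stack-Augmented RNN and Graph Attention Networks:

```python
import random

random.seed(0)
ALPHABET = "CNOH"  # toy atom vocabulary (purely illustrative)
weights = {c: 1.0 for c in ALPHABET}  # generator "policy" parameters

def toy_affinity(ligand, targets=("N", "O")):
    """Dummy stand-in for the predictive network: average count of two
    'pharmacophore' letters, one per hypothetical target protein."""
    return sum(ligand.count(t) for t in targets) / len(targets)

def generate(length=8):
    """Toy stand-in for the generative network: sample letters with
    probability proportional to the current weights."""
    probs = [weights[c] for c in ALPHABET]
    return "".join(random.choices(ALPHABET, weights=probs, k=length))

def train(steps=500, lr=0.1):
    """REINFORCE-style loop: score each ligand on average multi-target
    affinity and nudge the generator toward above-baseline molecules."""
    baseline = 0.0
    for _ in range(steps):
        lig = generate()
        advantage = toy_affinity(lig) - baseline
        baseline = 0.9 * baseline + 0.1 * toy_affinity(lig)
        for c in lig:  # reinforce letters used in high-reward ligands
            weights[c] = max(1e-3, weights[c] + lr * advantage)

train()
```

After training, letters rewarded by the scorer dominate the sampler, mirroring (at toy scale) how the predictive network's feedback steers generation.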
Procedia PDF Downloads 97
22650 Microseismicity of the Tehran Region Based on Three Seismic Networks
Authors: Jamileh Vasheghani Farahani
Abstract:
The main purpose of this research is to identify the currently active faults and active tectonics of the area using three seismic networks in the Tehran region: (1) the Tehran Disaster Mitigation and Management Organization (TDMMO), (2) the Broadband Iranian National Seismic Network Center (BIN), and (3) the Iranian Seismological Center (IRSC). In this study, we analyzed microearthquakes that occurred in Tehran city and its surroundings, recorded by the Tehran networks from 1996 to 2015, and identified several active faults and trends in the region. There is a 200-year record of historical earthquakes in Tehran. Historical and instrumental seismicity show that the east of Tehran is more active than the west. The Mosha fault in the north of Tehran is one of the active faults of the central Alborz. Other major faults in the region are the Kahrizak, Eyvanakey, Parchin, and North Tehran faults. An important seismic region is the intersection of the Mosha and North Tehran fault systems (Kalan village in Lavasan), which shows a cluster of microearthquakes. According to the historical and microseismic events analyzed in this research, there is a seismic gap in the southeast of Tehran. An empirical relationship is used to assess the maximum magnitude (Mmax) based on the rupture length. There is a probability of occurrence of a strong earthquake of magnitude 7.0 to 7.5 in the region, based on the assessed capability of the major faults, such as the Parchin and Eyvanakey faults, and on the historical earthquakes.
Keywords: Iran, major faults, microseismicity, Tehran
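The abstract does not name the empirical rupture-length relationship it uses; as a hedged stand-in, the widely used Wells and Coppersmith (1994) all-slip-type regression can be sketched (the study's own coefficients may differ):

```python
import math

def mmax_from_rupture_length(srl_km, a=5.08, b=1.16):
    """Moment magnitude from surface rupture length (km) using the
    Wells-Coppersmith all-slip-type regression, M = a + b * log10(SRL).
    Coefficients are the published all-type values; a specific fault
    application would need the study's own relationship."""
    return a + b * math.log10(srl_km)

# Example: a ~70 km capable rupture yields a magnitude near 7.2,
# within the 7.0-7.5 range discussed in the abstract.
m = mmax_from_rupture_length(70.0)
```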
Procedia PDF Downloads 365
22649 Multiscale Simulation of Absolute Permeability in Carbonate Samples Using 3D X-Ray Micro Computed Tomography Images Textures
Authors: M. S. Jouini, A. Al-Sumaiti, M. Tembely, K. Rahimov
Abstract:
Characterizing the rock properties of carbonate reservoirs is highly challenging because of rock heterogeneities revealed at several length scales. In the last two decades, the Digital Rock Physics (DRP) approach has been implemented successfully in sandstone reservoirs to understand rock property behaviour at the pore scale. This approach uses 3D X-ray microtomography images to characterize the pore network and to simulate rock properties from those images. Even though DRP is able to predict realistic rock property results in sandstone reservoirs, it still suffers from the lack of a clear workflow in carbonate rocks. The main challenge is the integration of properties simulated at different scales in order to obtain the effective rock property of core plugs. In this paper, we propose a multi-scale numerical simulation workflow to characterize absolute permeability in carbonate core plug samples. We describe a procedure to simulate the porosity and absolute permeability of a carbonate rock sample using textures of micro-computed tomography images. First, we discretize the X-ray micro-CT image into a regular grid. Then, we use a textural parametric model to classify each cell of the grid using supervised classification. The main parameters are first- and second-order statistics such as the mean, variance, range, and autocorrelations computed from sub-bands obtained after wavelet decomposition. Furthermore, we fill in the permeability property of each cell using two strategies based on numerical simulation values obtained locally on subsets. Finally, we simulate the effective permeability numerically using a Darcy's law simulator. The results obtained for the studied carbonate sample show good agreement with the experimentally measured property.
Keywords: multiscale modeling, permeability, texture, micro-tomography images
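The final simulation step rests on Darcy's law, which can be sketched directly; the flow-test numbers below are illustrative, not measurements from the paper:

```python
def darcy_permeability(q, mu, length, area, dp):
    """Absolute permeability (m^2) from Darcy's law for single-phase
    steady flow: k = q * mu * L / (A * dP), with q in m^3/s, mu in Pa.s,
    L in m, A in m^2, and dP in Pa."""
    return q * mu * length / (area * dp)

# Illustrative core-flood numbers: a 1 cm sample of 1 cm^2 cross-section,
# water viscosity, and a 1 bar pressure drop.
k = darcy_permeability(q=1e-6, mu=1e-3, length=0.01, area=1e-4, dp=1e5)
# k comes out at 1e-12 m^2, on the order of one darcy (9.87e-13 m^2)
```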
Procedia PDF Downloads 183
22648 Comparison of Crossover Types to Obtain Optimal Queries Using Adaptive Genetic Algorithm
Authors: Wafa’ Alma'Aitah, Khaled Almakadmeh
Abstract:
This study presents an information retrieval system that uses a genetic algorithm to increase retrieval efficiency. Using the vector space model, information retrieval is based on the similarity measurement between a query and the documents. Documents with high similarity to the query are judged more relevant and should be retrieved first. Using genetic algorithms, each query is represented by a chromosome; these chromosomes are fed into the genetic operator process of selection, crossover, and mutation until an optimized query chromosome is obtained for document retrieval. Results show that information retrieval with an adaptive crossover probability, single-point crossover, and roulette wheel selection gives the highest recall. The proposed approach is verified using 242 proceedings abstracts collected from the Saudi Arabian national conference.
Keywords: genetic algorithm, information retrieval, optimal queries, crossover
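The two best-performing operators named above, roulette wheel selection and single-point crossover, can be sketched on query-weight chromosomes (the chromosome encoding here is an illustrative assumption):

```python
import random

def roulette_select(population, fitnesses, rng):
    """Roulette wheel selection: pick an individual with probability
    proportional to its fitness."""
    pick = rng.uniform(0, sum(fitnesses))
    acc = 0.0
    for individual, fit in zip(population, fitnesses):
        acc += fit
        if acc >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

def single_point_crossover(parent_a, parent_b, rng):
    """Single-point crossover on two equal-length query weight vectors."""
    point = rng.randrange(1, len(parent_a))
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

rng = random.Random(0)
a, b = [1, 1, 1, 1], [0, 0, 0, 0]  # toy query chromosomes
child1, child2 = single_point_crossover(a, b, rng)
```

In the adaptive variant described in the abstract, the crossover probability itself would be adjusted during the run rather than held fixed.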
Procedia PDF Downloads 292
22647 Objective-Based System Dynamics Modeling to Forecast the Number of Health Professionals in Pudong New Area of Shanghai
Authors: Jie Ji, Jing Xu, Yuehong Zhuang, Xiangqing Kang, Ying Qian, Ping Zhou, Di Xue
Abstract:
Background: In 2014, there were 28,341 health professionals in the Pudong new area of Shanghai, and the number per 1,000 population was 5.199, 55.55% higher than that in 2006. However, it remained below the average number of health professionals per 1,000 population in Shanghai throughout 2006 to 2014. Therefore, allocation planning for health professionals in the Pudong new area has become a high-priority task in order to meet future demands for health care. In this study, we constructed an objective-based system dynamics model to forecast the number of health professionals in the Pudong new area of Shanghai in 2020. Methods: We collected data from health statistics reports and a previous survey of human resources in the Pudong new area of Shanghai. Nine experts from health administrative departments, public hospitals, and community health service centers were consulted to estimate the current and future status of nine variables used in the system dynamics model. Based on the objective of 8.0 health professionals per 1,000 population in Shanghai for 2020, the system dynamics model was constructed to forecast the number of health professionals needed in the Pudong new area in 2020. Results: The model forecasted that there will be 37,330 health professionals (6.433 per 1,000 population) in 2020. If the success rate of health professional recruitment changed from 20% to 70%, the number of health professionals per 1,000 population would change from 5.269 to 6.919. If this rate changed from 20% to 70% and the success rate of building new beds changed from 5% to 30% at the same time, the number of health professionals per 1,000 population would change from 5.269 to 6.923. Conclusions: The system dynamics model can be used to simulate and forecast the number of health professionals. However, without significant changes in health policies and the management system, the number of health professionals per 1,000 population will not reach the objective in the Pudong new area in 2020.
Keywords: allocation planning, forecast, health professional, system dynamics
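The core of such a system dynamics model is a stock-and-flow structure stepped through time; a minimal sketch, in which the recruitment and attrition parameters are invented for illustration and are not the study's calibrated values:

```python
def project_health_professionals(stock, years, recruits_per_year,
                                 success_rate, attrition_rate):
    """Euler-stepped stock-and-flow projection with an annual step:
    stock grows by successful recruits and shrinks by attrition."""
    for _ in range(years):
        inflow = recruits_per_year * success_rate
        outflow = stock * attrition_rate
        stock = stock + inflow - outflow
    return stock

# Starting from the 2014 stock reported in the abstract (28,341);
# all rate parameters below are purely illustrative.
projected = project_health_professionals(
    stock=28341, years=6, recruits_per_year=5000,
    success_rate=0.5, attrition_rate=0.03)
```

In the full model, feedback loops would link the recruitment success rate and bed-building rate to the evolving stock rather than holding them constant.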
Procedia PDF Downloads 386
22646 A Neural Network Approach to Evaluate Supplier Efficiency in a Supply Chain
Authors: Kishore K. Pochampally
Abstract:
The success of a supply chain relies heavily on the efficiency of the suppliers involved. In this paper, we propose a neural network approach to evaluate the efficiency of a supplier being considered for inclusion in a supply chain, using the available linguistic (fuzzy) data of suppliers that already exist in the supply chain. The approach is carried out in three phases, as follows: In phase one, we identify criteria for evaluation of the supplier of interest. Then, in phase two, we use performance measures of already existing suppliers to construct a neural network that gives weights (importance values) of the criteria identified in phase one. Finally, in phase three, we calculate the overall rating of the supplier of interest. The following are the major findings of the research conducted for this paper: (i) linguistic (fuzzy) ratings of suppliers such as 'good', 'bad', etc., can be converted (defuzzified) to numerical ratings (1-10 scale) using fuzzy logic so that those ratings can be used for further quantitative analysis; (ii) it is possible to construct and train a multi-level neural network in order to determine the weights of the criteria that are used to evaluate a supplier; and (iii) Borda's rule can be used to group the weighted ratings and calculate the overall efficiency of the supplier.
Keywords: fuzzy data, neural network, supplier, supply chain
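Finding (i), defuzzifying linguistic ratings to a 1-10 scale, can be sketched with triangular fuzzy numbers; the vocabulary and triangle parameters below are illustrative assumptions, not the paper's exact membership functions:

```python
def defuzzify_rating(label):
    """Map a linguistic rating to a crisp score on a 1-10 scale via the
    centroid of a triangular fuzzy number (a, b, c). The vocabulary and
    triangles here are illustrative, not the paper's."""
    triangles = {
        "very bad": (1, 1, 3),
        "bad": (1, 3, 5),
        "fair": (3, 5, 7),
        "good": (5, 7, 9),
        "very good": (7, 9, 10),
    }
    a, b, c = triangles[label]
    return (a + b + c) / 3.0  # centroid of a triangular fuzzy number

score = defuzzify_rating("good")  # -> 7.0
```

The crisp scores produced this way can then feed the neural network of phase two as ordinary numeric inputs.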
Procedia PDF Downloads 114
22645 A Model of Teacher Leadership in History Instruction
Authors: Poramatdha Chutimant
Abstract:
The objective of the research is to propose a model of teacher leadership in history instruction for practical utilization. Everett M. Rogers' Diffusion of Innovations theory is applied as the theoretical framework. A qualitative method is used in the study, with an interview protocol as the instrument to collect primary data from best-practice teachers recognized by the Office of National Education Commission (ONEC). Open-ended questions are used in the interview protocol in order to gather varied data. Information on the international context of history instruction serves as secondary data to support the summarizing process (content analysis). A dendrogram is the key tool used to interpret and synthesize the primary data, with the secondary data serving to support explanation and elaboration. In-depth interviews are used to collect information from seven experts in the educational field. Finally, the focal point is to validate the draft model in terms of future utilization.
Keywords: history study, nationalism, patriotism, responsible citizenship, teacher leadership
Procedia PDF Downloads 280
22644 Experimental Modeling of Spray and Water Sheet Formation Due to Wave Interactions with Vertical and Slant Bow-Shaped Model
Authors: Armin Bodaghkhani, Bruce Colbourne, Yuri S. Muzychka
Abstract:
The process of spray-cloud formation and the flow kinematics produced by breaking-wave impact on vertical and slant lab-scale bow-shaped models were experimentally investigated. Bubble Image Velocimetry (BIV) and Image Processing (IP) techniques were applied to study the various types of wave-model impacts. Different wave characteristics were generated in a tow tank to investigate the effects of wave characteristics, such as wave phase velocity and wave steepness, on droplet velocities and on the process of spray-cloud formation. The phase ensemble-averaged vertical velocity and turbulent intensity were computed. A high-speed camera and diffused LED backlights were utilized to capture images for further post-processing. Various pressure sensors and capacitive wave probes were used to measure the wave impact pressure and the free-surface profile at different locations on the model and in the wave tank, respectively. Droplet sizes and velocities were measured using BIV and IP techniques, tracing bubbles and droplets by correlating the texture in successive images. The impact pressure and droplet size distributions were compared with several previous experimental models, and satisfactory agreement was achieved. The distribution of droplets in front of both models is presented. Owing to the highly transient nature of spray formation, the drag coefficient for several stages of this transient displacement was calculated for various droplet size ranges and different Reynolds numbers based on the ensemble-average method. The experimental results show that the slant model produces less spray than the vertical model, and the droplet velocities generated from wave impact with the slant model are lower than those for the vertical model.
Keywords: spray characteristics, droplet size and velocity, wave-body interactions, bubble image velocimetry, image processing
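For the droplet drag coefficients mentioned above, a standard sphere-drag correlation can serve as a hedged stand-in; the study derives its own stage-dependent coefficients from ensemble-averaged measurements, so this is only a generic reference curve:

```python
def drag_coefficient(re):
    """Sphere drag coefficient via the Schiller-Naumann correlation,
    Cd = (24 / Re) * (1 + 0.15 * Re**0.687), valid up to Re ~ 1000.
    A generic reference, not the study's fitted coefficients."""
    return (24.0 / re) * (1.0 + 0.15 * re ** 0.687)

cd = drag_coefficient(1.0)  # -> 27.6, i.e. 24 * 1.15 at Re = 1
```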
Procedia PDF Downloads 300
22643 New Chances of Reforming Pedagogical Approach In Secondary English Class in China under the New English Curriculum and National College Entrance Examination Reform
Authors: Yue Wang
Abstract:
Five years have passed since the newest English curriculum reform policy was published in China, and hand-wringing has spread among teachers who accuse it of being another 'Wearing New Shoes to Walk the Old Road' policy. This paper provides a thorough philosophical policy analysis of the serious efforts that have been made to support this reform and reveals the hindrances that have kept the reform from yielding the desired effect. Blame could easily be put on teachers for their insufficient pedagogical content knowledge, conservative resistance, the handicaps of large class sizes and limited teaching time, and so on. However, the underlying causes of this implementation failure are interrelated factors in the NCEE-centred education system, such as reluctance from students, the lack of school and education bureau support, and insufficient teacher training. A further discussion of the 2017-2020 NCEE reform of English suggests new possibilities for authentic pedagogical reform in secondary English classes. In all, pedagogical reform at the secondary level is heading towards a brighter future with the initiation of the new NCEE reform.
Keywords: English curriculum, failure, NCEE, new possibilities, pedagogical, policy analysis, reform
Procedia PDF Downloads 141
22642 Mathematical Modeling for Diabetes Prediction: A Neuro-Fuzzy Approach
Authors: Vijay Kr. Yadav, Nilam Rathi
Abstract:
Accurate prediction of glucose levels in diabetes mellitus is required to avoid affecting the functioning of major organs of the human body. This study describes the fundamental assumptions and two different methodologies for blood glucose prediction. The first is based on the back-propagation algorithm of an Artificial Neural Network (ANN), and the second is based on a neuro-fuzzy technique, the Fuzzy Inference System (FIS). Errors between the proposed methods are further discussed through various statistical measures such as the mean square error (MSE) and the normalised mean absolute error (NMAE). The main objective of the present study is to develop a mathematical model for predicting blood glucose 12 hours in advance, using data sets of three patients collected over 60 days. Comparative studies of the accuracy against other existing models are also made with the same data set.
Keywords: back-propagation, diabetes mellitus, fuzzy inference system, neuro-fuzzy
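The two error measures used for comparison can be sketched directly; note that NMAE has several definitions in the literature, and the normalisation by the mean of the actual values used here is one common choice, not necessarily the paper's:

```python
def mse(actual, predicted):
    """Mean square error."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def nmae(actual, predicted):
    """Normalised mean absolute error: MAE divided by the mean of the
    actual values (one common normalisation; definitions vary)."""
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    return mae / (sum(actual) / len(actual))

# Illustrative glucose readings (mg/dL) and 12-hour-ahead forecasts.
glucose = [100.0, 110.0, 120.0]
forecast = [102.0, 108.0, 123.0]
```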
Procedia PDF Downloads 257
22641 Using Time Series NDVI to Model Land Cover Change: A Case Study in the Berg River Catchment Area, Western Cape, South Africa
Authors: Adesuyi Ayodeji Steve, Zahn Munch
Abstract:
This study investigates the use of MODIS NDVI to identify agricultural land cover change areas on an annual time step (2007-2012) and to characterize the trend in the study area. An ISODATA classification was performed on the MODIS imagery to select only the agricultural class, producing three class groups: agriculture, agriculture/semi-natural, and semi-natural. NDVI signatures were created for the time series to identify areas dominated by cereals and vineyards, with the aid of ancillary, Pictometry, and field-sample data. The NDVI signature curves and training samples aided in creating a decision tree model in WEKA 3.6.9. From the training samples, two classification models were built in WEKA using the decision tree classifier (J48) algorithm: Model 1 included the ISODATA classification and Model 2 did not, with accuracies of 90.7% and 88.3%, respectively. The two models were used to classify the whole study area, producing two land cover maps with classification accuracies of 77% for Model 1 and 80% for Model 2. Model 2 was used to create change detection maps for all the other years. Subtle changes and areas of consistency (unchanged areas) were observed in the agricultural classes and crop practices over the years, as predicted by the land cover classification. 41% of the catchment comprises cereals, with 35% possibly following a crop rotation system. Vineyards largely remained constant over the years, with some conversion to vineyard (1%) from other land cover classes. Some of the changes might be the result of misclassification and the crop rotation system.
Keywords: change detection, land cover, MODIS, NDVI
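The index underlying the whole workflow is the standard NDVI ratio of near-infrared to red reflectance; the reflectance values in the example are illustrative, not MODIS measurements from the study area:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and
    red surface reflectances: (NIR - Red) / (NIR + Red), in [-1, 1]."""
    return (nir - red) / (nir + red)

# Dense green vegetation reflects strongly in NIR and weakly in red,
# giving high NDVI; bare soil or senescent crops score much lower.
vigorous_crop = ndvi(0.5, 0.1)   # ~0.67
bare_soil = ndvi(0.25, 0.2)      # ~0.11
```

Time series of such values per pixel form the NDVI signature curves from which the cereal and vineyard classes were separated.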
Procedia PDF Downloads 402
22640 An Information-Based Approach for Preference Method in Multi-Attribute Decision Making
Authors: Serhat Tuzun, Tufan Demirel
Abstract:
Multi-Criteria Decision Making (MCDM) is the modelling of real-life situations to solve the problems we encounter. It is a discipline that aids decision makers who are faced with conflicting alternatives in making an optimal decision. MCDM problems can be classified into two main categories, Multi-Attribute Decision Making (MADM) and Multi-Objective Decision Making (MODM), based on their different purposes and data types. Although various MADM techniques have been developed for the problems encountered, their methodology is limited in modelling real life. Moreover, objective results are hard to obtain, and the findings are generally derived from subjective data. New and modified techniques have been developed by introducing approaches such as fuzzy logic, but these comprehensive techniques, even though better at modelling real life, have not found a place in real-world applications because their complex structure makes them hard to apply. These constraints restrict the development of MADM. This study aims to conduct a comprehensive analysis of preference methods in MADM and to propose an approach based on information. For this purpose, a detailed literature review has been conducted, and current approaches, with their advantages and disadvantages, have been analyzed. Then, the approach is introduced. In this approach, performance values of the criteria are calculated in two steps: first, by determining the distribution of each attribute and standardizing the values; then, by calculating the information content of each attribute as informational energy.
Keywords: literature review, multi-attribute decision making, operations research, preference method, informational energy
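The two-step calculation described above can be sketched with Onicescu's informational energy, the usual quantity behind that term (assumed here to be the one intended): normalise an attribute's values into a distribution, then sum the squared probabilities:

```python
def informational_energy(values):
    """Onicescu informational energy of an attribute: normalise the
    non-negative attribute values into a distribution p, then return
    sum(p_i^2). Energy is 1/n for a uniform distribution and 1.0 when
    all mass is concentrated on a single alternative."""
    total = sum(values)
    probs = [v / total for v in values]
    return sum(p * p for p in probs)

uniform = informational_energy([1, 1, 1, 1])       # -> 0.25 (least informative)
concentrated = informational_energy([0, 0, 0, 9])  # -> 1.0  (most informative)
```

An attribute with higher informational energy discriminates more sharply between alternatives, which is what makes it usable as a preference weight.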
Procedia PDF Downloads 224
22639 A Prediction Model Using the Price Cyclicality Function Optimized for Algorithmic Trading in Financial Market
Authors: Cristian Păuna
Abstract:
After the widespread adoption of electronic trading, automated trading systems have become a significant part of the business intelligence system of any modern financial investment company. Today, an important share of trades is made completely automatically by computers using mathematical algorithms. The trading decisions are taken almost instantly by logical models, and the orders are sent by low-latency automatic systems. This paper presents a real-time price prediction methodology designed especially for algorithmic trading. Based on the price cyclicality function, the methodology generates price cyclicality bands to predict the optimal levels for entries and exits. In order to automate the trading decisions, the cyclicality bands generate automated trading signals. We have found that the model can be used with good results to predict changes in market behavior. Using these predictions, the model can automatically adapt the trading signals in real time to maximize the trading results. The paper reveals the methodology to optimize and implement this model in automated trading systems. Tests prove that this methodology can be applied with good efficiency in different timeframes. Real trading results are also displayed and analyzed in order to qualify the methodology and to compare it with other models. In conclusion, the price prediction model using the price cyclicality function is a reliable trading methodology for algorithmic trading in the financial market.
Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, price prediction
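The paper's price cyclicality function is not reproduced in the abstract, so as a hedged stand-in the sketch below shows only the signal-generation step: an oscillator scaled to [0, 100] emits a buy signal when it crosses up through a lower band and a sell signal when it crosses down through an upper band. The band levels and oscillator values are illustrative assumptions.

```python
# Hedged stand-in for the paper's cyclicality bands (the actual price
# cyclicality function is not reproduced here): given an oscillator scaled
# to [0, 100], emit a signal whenever it crosses an assumed band level.
def band_signals(oscillator, low=20.0, high=80.0):
    """Buy on an upward cross of the low band, sell on a downward cross
    of the high band, hold otherwise; one signal per consecutive pair."""
    signals = []
    for prev, cur in zip(oscillator, oscillator[1:]):
        if prev < low <= cur:
            signals.append("buy")
        elif prev > high >= cur:
            signals.append("sell")
        else:
            signals.append("hold")
    return signals

print(band_signals([10, 25, 50, 85, 70]))
```

In an adaptive system of the kind described, the band levels themselves would be re-optimized in real time rather than held fixed.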
Procedia PDF Downloads 184
22638 Qualitative Analysis of User Experiences and Needs for Educational Chatbots in Higher Education
Authors: Felix Golla
Abstract:
In an era where technology increasingly intersects with education, the potential of chatbots and ChatGPT agents in enhancing student learning experiences in higher education is both significant and timely. This study explores the integration of these AI-driven tools in educational settings, emphasizing their design and functionality to meet the specific needs of students. Recognizing the gap in literature concerning student-centered AI applications in education, this research offers valuable insights into the role and efficacy of chatbots and ChatGPT agents as educational tools. Employing qualitative research methodologies, the study involved conducting semi-structured interviews with university students. These interviews were designed to gather in-depth insights into the students' experiences and expectations regarding the use of AI in learning environments. The High-Performance Cycle Model, renowned for its focus on goal setting and motivation, served as the theoretical framework guiding the analysis. This model helped in systematically categorizing and interpreting the data, revealing the nuanced perceptions and preferences of students regarding AI tools in education. The major findings of the study indicate a strong preference among students for chatbots and ChatGPT agents that offer personalized interaction, adaptive learning support, and regular, constructive feedback. These features were deemed essential for enhancing student engagement, motivation, and overall learning outcomes. Furthermore, the study revealed that students perceive these AI tools not just as passive sources of information but as active facilitators in the learning process, capable of adapting to individual learning styles and needs. In conclusion, this study underscores the transformative potential of chatbots and ChatGPT agents in higher education. 
It highlights the need for these AI tools to be designed with a student-centered approach, ensuring their alignment with educational objectives and student preferences. The findings contribute to the evolving discourse on AI in education, suggesting a paradigm shift towards more interactive, responsive, and personalized learning experiences. This research not only informs educators and technologists about the desirable features of educational chatbots but also opens avenues for future studies to explore the long-term impact of AI integration in academic curricula.Keywords: chatbot design in education, high-performance cycle model application, qualitative research in AI, student-centered learning technologies
Procedia PDF Downloads 69
22637 Direct Approach in Modeling Particle Breakage Using Discrete Element Method
Authors: Ebrahim Ghasemi Ardi, Ai Bing Yu, Run Yu Yang
Abstract:
The current study aims to develop an in-house discrete element method (DEM) code and link it to a direct breakage event, making it possible to determine particle breakage, and then the fragment size distribution, simultaneously with the DEM simulation. The particle breakage is applied directly inside the DEM computation algorithm: if any breakage happens, the original particle is replaced with its daughter fragments. In this way, the calculation proceeds with an updated particle list, which is very similar to the real grinding environment. To validate the developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this method provided a simplified environment to test the model. Accordingly, a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)–powder bed impacts in a ball mill and during particle bed impact tests. Mono, binary and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The results obtained from the DEM simulations showed a reduction in the specific breakage rate for coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the DEM simulations. Fine particles in a particle bed increase mechanical energy loss, and reduce and distribute interparticle forces, thereby inhibiting the breakage of the coarse component. On the other hand, the specific breakage rate of fine particles increased due to contacts associated with coarse particles. This phenomenon, known as acceleration, was shown to be less significant, but should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.
Keywords: particle bed, breakage models, breakage kinetics, discrete element method
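The replacement step described above can be sketched as follows: when a contact delivers enough energy to break a particle, the parent is removed from the particle list and its daughter fragments are appended, conserving volume. The breakage criterion and fragment sizing below are illustrative assumptions, not the paper's calibrated breakage model.

```python
# Illustrative sketch of the replacement step: a particle that breaks is
# removed from the particle list and replaced by equal-volume daughter
# fragments. The breakage criterion below is a toy assumption, not the
# calibrated breakage model used in the study.
def daughters(size, n_fragments=2):
    """Split a parent into equal fragments conserving volume (~ size**3)."""
    return [size / n_fragments ** (1 / 3)] * n_fragments

def breakage_step(particles, impact_energy, strength=1.0):
    """particles: list of particle sizes. A particle breaks when the toy
    criterion impact_energy * size exceeds its strength."""
    updated = []
    for size in particles:
        if impact_energy * size > strength:
            updated.extend(daughters(size))   # replace parent with daughters
        else:
            updated.append(size)              # particle survives intact
    return updated

print(breakage_step([2.0, 0.4, 0.4], impact_energy=1.0))
```

In the actual DEM loop this step would run after each contact-force update, so the fragment list feeds directly back into the next timestep.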
Procedia PDF Downloads 199
22636 A Collaborative Problem Driven Approach to Design an HR Analytics Application
Authors: L. Atif, C. Rosenthal-Sabroux, M. Grundstein
Abstract:
The requirements engineering process is a crucial phase in the design of complex systems. The purpose of our research is to present a collaborative, problem-driven requirements engineering approach that aims at improving the design of a Decision Support System (DSS) as an Analytics application. This approach has been adopted to design a Human Resource management DSS. The requirements engineering process is presented as a series of guidelines for activities that must be implemented to assure that the final product satisfies end-user requirements and takes into account the limitations identified. We know that a well-posed statement of a problem is 'a problem whose crucial character arises from collectively produced estimation and a formulation found to be acceptable by all the parties'. We also know that DSSs were developed to help decision-makers solve their unstructured problems. We thus base our research on the assumption that developing a DSS, particularly for helping with poorly structured or unstructured decisions, cannot be done without considering the end users' decision problems, how to represent them collectively, the decisions' content and meaning, and the decision-making process; the field issues therefore arise in a multidisciplinary perspective. Our approach is a problem-driven and collaborative approach to designing DSS technologies: common end-user problems are reflected in the upstream design phase, and in the downstream phase these problems determine the design choices and the potential technical solution. We thus rely on a categorization of HR problems for a development mirroring the Analytics solution. This brings out a new data-driven DSS typology: Descriptive Analytics, Explicative or Diagnostic Analytics, Predictive Analytics, and Prescriptive Analytics.
In our research, identifying the problem takes place alongside the design of the solution, so we must resort to significant transformations of the representations associated with the HR Analytics application in order to build an increasingly detailed representation of the goal to be achieved. Here, collective cognition is reflected in the establishment of transfer functions of representations throughout the design process.
Keywords: DSS, collaborative design, problem-driven requirements, analytics application, HR decision making
Procedia PDF Downloads 295
22635 Modeling of System Availability and Bayesian Analysis of Bivariate Distribution
Authors: Muhammad Farooq, Ahtasham Gul
Abstract:
To meet the desired standard, it is important to monitor and analyze different engineering processes to obtain the desired output. Bivariate distributions have received considerable attention in recent years for describing the randomness of natural as well as artificial mechanisms. In this article, a bivariate model is constructed using two independent models developed by the nesting approach, in order to study the effect of each component on reliability. Further, a Bayes analysis of system availability is carried out by considering prior parametric variations in the failure time and repair time distributions. Basic statistical characteristics of the marginal distribution, such as the mean, median, and quantile function, are discussed. We use an inverse Gamma prior and study its frequentist properties by conducting a Markov Chain Monte Carlo (MCMC) sampling scheme.
Keywords: reliability, system availability, Weibull, inverse Lomax, Markov Chain Monte Carlo, Bayesian
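The availability quantity analyzed above can be illustrated with a small Monte Carlo sketch of steady-state availability, A = E[uptime] / (E[uptime] + E[repair time]). The article pairs Weibull failure times with inverse Lomax repair times; in this sketch both draws are Weibull so the means are finite and easy to check against the closed form, which is an assumption of the sketch only.

```python
import math
import random

# Hedged Monte Carlo sketch of steady-state availability,
# A = E[uptime] / (E[uptime] + E[repair time]). Both failure and repair
# times are drawn as Weibull here (an assumption of this sketch; the
# article uses inverse Lomax repair times).
def availability(n, fail_scale, fail_shape, repair_scale, repair_shape, seed=1):
    rng = random.Random(seed)
    up = sum(rng.weibullvariate(fail_scale, fail_shape) for _ in range(n))
    down = sum(rng.weibullvariate(repair_scale, repair_shape) for _ in range(n))
    return up / (up + down)

# Analytic check: the Weibull mean is scale * Gamma(1 + 1/shape).
mtbf = 100.0 * math.gamma(1 + 1 / 2.0)
mttr = 5.0 * math.gamma(1 + 1 / 1.5)
print(mtbf / (mtbf + mttr))                       # theoretical availability
print(availability(20000, 100.0, 2.0, 5.0, 1.5))  # Monte Carlo estimate
```

With 20,000 cycles the Monte Carlo estimate sits within a fraction of a percent of the closed-form value, which is the kind of sanity check a Bayesian analysis of the full bivariate model would also start from.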
Procedia PDF Downloads 71
22634 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e. sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g. hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
Procedia PDF Downloads 183
22633 The Extension of the Kano Model by the Concept of Over-Service
Authors: Lou-Hon Sun, Yu-Ming Chiu, Chen-Wei Tao, Chia-Yun Tsai
Abstract:
It is common practice for many companies to ask employees to provide heart-touching service for customers and to emphasize the attitude of 'customer first'. However, services may not necessarily gain praise, and may actually be considered excessive, if customers do not appreciate such behaviors. In reality, many restaurant businesses try to provide as much service as possible without considering whether over-provision may lead to negative customer reception. A survey of 894 people in Britain revealed that 49 percent of respondents consider over-attentive waiters the most annoying aspect of dining out. It can be seen that merely aiming to exceed customers' expectations without actually addressing their needs only further distances and dissociates the standard of services from the goal of customer satisfaction itself. Over-service is defined as 'service provided that exceeds customer expectations, or that customers simply deem redundant, resulting in negative perception'. It was found that customers' reactions and complaints concerning over-service are not as intense as those against service failures caused by the inability to meet expectations; consequently, it is more difficult for managers to become aware of the existence of over-service. Thus the ability to manage over-service behaviors is a significant topic for consideration. The Kano model classifies customer preferences into five categories: attractive quality attributes, one-dimensional quality attributes, must-be quality attributes, indifferent quality attributes and reverse quality attributes. The model remains popular among researchers exploring quality aspects and customer satisfaction. Nevertheless, several studies have indicated that Kano's model cannot fully capture the nature of service quality. The concept of over-service can be used to restructure the model and provide a better understanding of the service quality construct.
In this research, the structure of Kano's two-dimensional questionnaire will be used to classify the factors into different dimensions. The same questions will be used in a second questionnaire to identify the over-service experienced by the respondents. The findings of these two questionnaires will be used to analyze the relevance between service quality classification and over-service behaviors. The subjects of this research are customers of fine dining chain restaurants. Three hundred questionnaires will be issued based on the stratified random sampling method. Items for measurement will be derived from the DINESERV scale. The tangible dimension of the questionnaire will be eliminated because this research focuses on employee behaviors. The extension of the Kano model will not only develop a better understanding of customer needs and expectations but also enhance the management of service quality.
Keywords: consumer satisfaction, DINESERV, Kano model, over-service
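The two-dimensional classification described above follows Kano's standard evaluation table: each item is asked in functional ('How do you feel if this service is provided?') and dysfunctional ('How do you feel if it is not provided?') form, and the answer pair maps to a quality attribute. The table below is the conventional Kano evaluation table, not one derived from this study's data.

```python
# Kano's two-dimensional classification: answers are coded 1 = like,
# 2 = must-be, 3 = neutral, 4 = live-with, 5 = dislike, and each
# (functional, dysfunctional) answer pair maps to a quality attribute
# via the standard Kano evaluation table.
KANO_TABLE = {
    (1, 1): "Questionable", (1, 2): "Attractive", (1, 3): "Attractive",
    (1, 4): "Attractive", (1, 5): "One-dimensional",
    (2, 1): "Reverse", (2, 2): "Indifferent", (2, 3): "Indifferent",
    (2, 4): "Indifferent", (2, 5): "Must-be",
    (3, 1): "Reverse", (3, 2): "Indifferent", (3, 3): "Indifferent",
    (3, 4): "Indifferent", (3, 5): "Must-be",
    (4, 1): "Reverse", (4, 2): "Indifferent", (4, 3): "Indifferent",
    (4, 4): "Indifferent", (4, 5): "Must-be",
    (5, 1): "Reverse", (5, 2): "Reverse", (5, 3): "Reverse",
    (5, 4): "Reverse", (5, 5): "Questionable",
}

def classify_item(functional, dysfunctional):
    return KANO_TABLE[(functional, dysfunctional)]

# e.g. "like it if the waiter checks in" + "dislike it if he never does"
print(classify_item(1, 5))
```

An over-service item would be expected to surface in the Reverse cells, where providing more of the attribute lowers satisfaction.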
Procedia PDF Downloads 161
22632 Bayesian Value at Risk Forecast Using Realized Conditional Autoregressive Expectile Model with an Application to Cryptocurrency
Authors: Niya Chen, Jennifer Chan
Abstract:
In the financial market, risk management helps to minimize potential loss and maximize profit. There are two ways to assess risks. The first is to calculate the risk directly based on volatility; the most common risk measurements are Value at Risk (VaR), the Sharpe ratio, and beta. Alternatively, we can look at the quantile of the return to assess the risk. Popular return models such as GARCH and stochastic volatility (SV) focus on modeling the return distribution by capturing the volatility dynamics; the quantile/expectile method, however, gives us an idea of the distribution at extreme return values, allowing us to forecast VaR using the return itself, which is direct information. The advantage of these non-parametric methods is that they are not bound by the distribution assumptions of parametric methods. The difference between them is that the expectile uses a second-order loss function while quantile regression uses a first-order loss function. We consider several quantile functions, different volatility measures, and estimates from some volatility models. To estimate the expectile, we use the Realized Conditional Autoregressive Expectile (CARE) model with a Bayesian method. We would like to see whether our proposed models outperform existing models for cryptocurrency, and we will test this mainly on Bitcoin, as well as Ethereum.
Keywords: expectile, CARE model, CARR model, quantile, cryptocurrency, Value at Risk
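The first-order versus second-order distinction mentioned above can be made concrete: quantile regression minimizes an asymmetric absolute loss, while expectiles minimize an asymmetric squared loss, and a sample tau-expectile can be found as the fixed point of an asymmetrically weighted mean. The return series below is illustrative.

```python
# Sketch of the contrast noted above: quantile regression uses a first-order
# (asymmetric absolute) loss, expectiles a second-order (asymmetric squared)
# loss. The tau-expectile of a sample is the fixed point of an asymmetrically
# weighted mean; the return series is illustrative.
def expectile(ys, tau, iters=200):
    e = sum(ys) / len(ys)                 # tau = 0.5 gives the mean itself
    for _ in range(iters):
        w = [tau if y > e else 1.0 - tau for y in ys]
        e = sum(wi * y for wi, y in zip(w, ys)) / sum(w)
    return e

def quantile_loss(ys, m, tau):
    return sum((tau if y >= m else 1.0 - tau) * abs(y - m) for y in ys)

def expectile_loss(ys, m, tau):
    return sum((tau if y >= m else 1.0 - tau) * (y - m) ** 2 for y in ys)

returns = [-3.0, -1.0, 0.0, 0.5, 1.0, 2.0]
print(expectile(returns, 0.5))    # the sample mean
print(expectile(returns, 0.05))   # a VaR-style lower-tail level
```

A small tau pulls the expectile deep into the loss tail, which is what makes it usable as a VaR-style risk level; a CARE model then lets that level evolve with realized volatility.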
Procedia PDF Downloads 109
22631 Shaping Traditional Chinese Culture in Contemporary Fashion: ‘Guochao’ as a Rising Aesthetic and the Case Study of the Designer Brand Angel Chen
Authors: Zhe Ginnie Wang
Abstract:
Recent cultural design studies have begun to shed light on the discussion of Western-Eastern cultural and aesthetic hybridization, especially in the field of fashion. With the unprecedented spread of cultural Chinese fashion design in the global fashion system, the under-identified ‘Guochao’ aesthetic that has emerged in the global market needs to be academically emphasized with a methodological approach looking at the Western-Eastern cultural hybridization present in fashion visualization. Through an in-depth and comprehensive investigation of a representative international-based Chinese designer, Angel Chen's fashion show 'Madam Qing', this paper provides a methodological approach on how a form of traditional culture can be effectively extracted and applied to modern design using the most effective techniques. The central approach examined in this study involves creating aesthetic revolutions by addressing Chinese cultural identity through re-creating and modernizing traditional Chinese culture in design.Keywords: style modernization, Chinese culture, guochao, design identity, fashion show, Angel Chen
Procedia PDF Downloads 356
22630 Boundary Feedback Stabilization of an Overhead Crane Model
Authors: Abdelhadi Elharfi
Abstract:
The problem of boundary feedback (exponential) stabilization of an overhead crane model represented by a PDE is considered. For any $r>0$, exponential stability at the desired decay rate $r$ is achieved in a semigroup setting by a collocated-type stabiliser of a target system, combined with a term involving the solution of an appropriate PDE.
Keywords: feedback stabilization, semigroup and generator, overhead crane system
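The abstract does not reproduce the model, but a typical overhead crane system of this kind takes the following form (a hedged sketch only, with a wave equation for the cable displacement $y(x,t)$ and a collocated boundary damper of gain $\alpha$ at the controlled end; the paper's exact system may differ):

```latex
\begin{align*}
  y_{tt}(x,t) &= y_{xx}(x,t), && 0 < x < 1,\ t > 0,\\
  y(0,t) &= 0, && \text{(fixed end)}\\
  y_x(1,t) &= -\alpha\, y_t(1,t), && \alpha > 0\ \text{(collocated boundary feedback)}.
\end{align*}
```

With such collocated velocity feedback the energy $E(t) = \tfrac{1}{2}\int_0^1 \bigl(y_t^2 + y_x^2\bigr)\,dx$ is non-increasing, which is the mechanism exploited to obtain decay at a prescribed rate $r$.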
Procedia PDF Downloads 406
22629 In and Out-Of-Sample Performance of Non Simmetric Models in International Price Differential Forecasting in a Commodity Country Framework
Authors: Nicola Rubino
Abstract:
This paper presents an analysis of the nominal exchange rate movements of a group of commodity-exporting countries relative to the US dollar. Using a series of Unrestricted Self-Exciting Threshold Autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US dollar CPI. Out-of-sample forecast accuracy is evaluated by calculating mean absolute error measures on the basis of 253-month rolling-window forecasts, and the comparison is extended to three additional models, namely a logistic smooth transition regression (LSTAR), an additive nonlinear autoregressive model (AAR) and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR nonlinearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five of the sixteen countries considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have quite poor relative forecasting performance, especially when compared to alternative non-linear specifications. Finally, by analyzing the implied half-lives of the estimated coefficients, our results confirm the presence, in the spirit of arbitrage band adjustment, of band convergence with inner unit root behaviour in five of the sixteen countries analyzed.
Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points
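The arbitrage-band interpretation above can be sketched with a two-regime SETAR(2; 1, 1): the AR(1) coefficient switches when the lagged price differential crosses a threshold, giving near-unit-root behaviour inside the band and faster mean reversion outside it. The coefficients below are illustrative assumptions, not estimates from the paper; the half-life of a deviation under an AR(1) coefficient rho is log(0.5)/log(rho).

```python
import math
import random

# Illustrative SETAR(2; 1, 1) sketch of the arbitrage-band mechanism: the
# AR(1) coefficient switches at an assumed threshold on the lagged
# differential, near unit root inside the band, mean-reverting outside.
# Coefficients are assumptions, not estimates from the paper.
def simulate_setar(n, threshold=1.0, rho_inner=0.98, rho_outer=0.6, seed=7):
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n - 1):
        rho = rho_inner if abs(y[-1]) <= threshold else rho_outer
        y.append(rho * y[-1] + rng.gauss(0.0, 0.5))
    return y

def half_life(rho):
    """Periods for a deviation to halve under AR(1) coefficient rho."""
    return math.log(0.5) / math.log(rho)

print(round(half_life(0.98), 1))  # slow adjustment inside the band
print(round(half_life(0.6), 2))   # fast adjustment outside it
```

The sharp contrast between the two half-lives is exactly the band-convergence pattern the paper reads off the estimated regimes.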
Procedia PDF Downloads 278