Search results for: Doyle-Fuller-Newman battery model
10858 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, the Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks, such as clustering and classification, with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.
Keywords: active contour, Bayesian, echocardiographic image, feature vector
Procedia PDF Downloads 420
10857 Training AI to Be Empathetic and Determining the Psychotype of a Person During a Conversation with a Chatbot
Authors: Aliya Grig, Konstantin Sokolov, Igor Shatalin
Abstract:
The report describes the methodology for collecting data and building an ML model for determining the personality psychotype, using profiling and personality traits methods based on several short messages from a user communicating on an arbitrary topic with a chitchat bot. In the course of the experiments, the minimum amount of text needed to confidently determine aspects of personality was identified. Model accuracy is 85%. The users' language of communication is English. The goal is AI for personalized communication with a user based on their mood, personality, and current emotional state. Features investigated during the research: personalized communication, providing empathy, adaptation to a user, and predictive analytics. In the report, we describe the processes that capture both structured and unstructured data pertaining to a user in large quantities and diverse forms. This data is then effectively processed through ML tools to construct a knowledge graph and draw inferences regarding users of text messages in a comprehensive manner. Specifically, the system analyzes users' behavioral patterns and predicts future scenarios based on this analysis. As a result of the experiments, we provide directions for further research on training AI models to be empathetic and creating personalized communication for a user.
Keywords: AI, empathetic, chatbot, AI models
Procedia PDF Downloads 92
10856 Application of Adaptive Neuro Fuzzy Inference Systems Technique for Modeling of Postweld Heat Treatment Process of Pressure Vessel Steel ASTM A516 Grade 70
Authors: Omar Al Denali, Abdelaziz Badi
Abstract:
The ASTM A516 Grade 70 steel is a suitable material used for the fabrication of boiler pressure vessels working in moderate and lower temperature services, and it has good weldability and excellent notch toughness. Post-weld heat treatment (PWHT), or stress-relieving heat treatment, has significant effects on avoiding the martensite transformation that results in high hardness, which can lead to cracking in the heat-affected zone (HAZ). An adaptive neuro-fuzzy inference system (ANFIS) was implemented to predict the material tensile strength in post-weld heat treatment (PWHT) experiments. The ANFIS models presented excellent predictions, and the comparison was carried out based on the mean absolute percentage error between the predicted values and the experimental values. The ANFIS model gave a mean absolute percentage error of 0.556%, which confirms the high accuracy of the model.
Keywords: prediction, post-weld heat treatment, adaptive neuro-fuzzy inference system, mean absolute percentage error
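For illustration, the mean absolute percentage error used above can be computed as follows; this is a minimal sketch in Python with hypothetical tensile strength values, not the paper's data:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

# Hypothetical tensile strengths (MPa): experimental vs. ANFIS-predicted.
experimental = [545.0, 552.0, 560.0, 548.0]
predicted = [542.1, 555.3, 557.2, 550.9]
print(f"MAPE = {mape(experimental, predicted):.3f} %")
```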
Procedia PDF Downloads 153
10855 Positive Psychology and the Social Emotional Ability Instrument (SEAI)
Authors: Victor William Harris
Abstract:
This research is a validation study of the Social Emotional Ability Inventory (SEAI), a multi-dimensional self-report instrument informed by positive psychology, emotional intelligence, social intelligence, and sociocultural learning theory. Designed for use in tandem with the Social Emotional Development (SEAD) theoretical model, the SEAI provides diagnostic-level guidance for professionals and individuals interested in investigating, identifying, and understanding social and emotional strengths, as well as remediating specific social competency deficiencies. The SEAI was shown to be psychometrically sound, exhibited strong internal reliability, and supported the a priori hypotheses of the SEAD. Additionally, confirmatory factor analysis provided evidence of goodness of fit, convergent and divergent validity, and supported a theoretical model that reflected SEAD expectations. The SEAI and SEAD hold potentially far-reaching and important practical implications for theoretical guidance and diagnostic-level measurement of social and emotional competency across a wide range of domains. Strategies that researchers, practitioners, educators, and individuals might use to deploy the SEAI in order to improve quality-of-life outcomes are discussed.
Keywords: emotion, emotional ability, positive psychology, social emotional ability, social emotional ability instrument
Procedia PDF Downloads 256
10854 A Model of Human Security: A Comparison of Vulnerabilities and Timespace
Authors: Anders Troedsson
Abstract:
For us humans, risks are intimately linked to human vulnerabilities - where there is vulnerability, there is potentially insecurity, and risk. Reducing vulnerability through compensatory measures means increasing security and decreasing risk. The paper suggests that a meaningful way to approach the study of risks (including threats, assaults, crises, etc.) is to understand the vulnerabilities these external phenomena evoke in humans. As is argued, the basis of risk evaluation, as well as responses, is the more or less subjective perception by the individual person, or a group of persons, exposed to the external event or phenomenon in question. This will be determined primarily by the vulnerability or vulnerabilities that the external factor is perceived to evoke. In this way, risk perception is primarily an inward dynamic, rather than an outward one. Therefore, a route towards an understanding of the perception of risks is a closer scrutiny of the vulnerabilities which they can evoke, thereby approaching an understanding of what in the paper is called the essence of risk (including threat, assault, etc.), or that which a certain perceived risk means to an individual or group of individuals. As a necessary basis for gauging the wide spectrum of potential risks and their meaning, the paper proposes a model of human vulnerabilities, drawing from, inter alia, a long tradition of needs theory. In order to account for the subjectivity factor, which mediates between the innate vulnerabilities on the one hand, and the event or phenomenon out there on the other hand, an ensuing ontological discussion about the timespace characteristics of risk/threat/assault as perceived by humans leads to the positing of two dimensions. These two dimensions are applied to the vulnerabilities, resulting in a modelling effort featuring four realms of vulnerabilities which are related to each other and together represent a dynamic whole. In approaching the problem of risk perception, the paper thus defines the relevant realms of vulnerabilities, depicting them as a dynamic whole. With reference to a substantial body of literature and a growing international policy trend since the 1990s, this model is put in the language of human security - a concept relevant not only for international security studies and policy, but also for other academic disciplines and spheres of human endeavor.
Keywords: human security, timespace, vulnerabilities, risk perception
Procedia PDF Downloads 336
10853 Compromising Relevance for Elegance: A Danger of Dominant Growth Models for Backward Economies
Authors: Givi Kupatadze
Abstract:
Backward economies are facing the challenge of achieving a sustainably high economic growth rate. Dominant growth models represent a roadmap in framing economic development strategy. This paper examines the relevance of the dominant growth models for backward economies. The Cobb-Douglas production function, the Harrod-Domar model of economic growth, the Solow growth model, and the general formula of gross domestic product are examined to undertake a comprehensive study of the dominant growth models. The deductive research method allows us to uncover major weaknesses of the dominant growth models and to come up with practical implications for economic development strategy. The key finding of the paper shows, contrary to what used to be taught by textbooks of economics, that the constant returns to scale property of the dominant growth models is a mere coincidence, and that its generalization over space and time can be regarded as one of the most unfortunate mistakes in the whole field of political economy. The major suggestion of the paper for backward economies is that understanding and considering a taxonomy of economic activities based on increasing and diminishing returns to scale represents a cornerstone of successful economic development strategy.
Keywords: backward economies, constant returns to scale, dominant growth models, taxonomy of economic activities
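The constant returns to scale property discussed above is easy to see in the Cobb-Douglas production function Y = A·K^α·L^β: scaling both inputs by λ scales output by λ^(α+β), so returns are constant exactly when α+β = 1. A minimal sketch with illustrative parameter values:

```python
def cobb_douglas(K, L, A=1.0, alpha=0.3, beta=0.7):
    """Output Y = A * K^alpha * L^beta."""
    return A * K**alpha * L**beta

def returns_to_scale(alpha, beta, scale=2.0, K=100.0, L=100.0):
    """Compare Y(scale*K, scale*L) with scale*Y(K, L)."""
    base = cobb_douglas(K, L, alpha=alpha, beta=beta)
    scaled = cobb_douglas(scale * K, scale * L, alpha=alpha, beta=beta)
    ratio = scaled / base  # equals scale**(alpha + beta)
    if abs(ratio - scale) < 1e-9:
        return "constant"
    return "increasing" if ratio > scale else "diminishing"

print(returns_to_scale(0.3, 0.7))  # alpha + beta = 1 -> constant
print(returns_to_scale(0.4, 0.7))  # alpha + beta > 1 -> increasing
print(returns_to_scale(0.2, 0.7))  # alpha + beta < 1 -> diminishing
```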
Procedia PDF Downloads 375
10852 A Collaborative Learning Model in Engineering Science Based on a Cyber-Physical Production Line
Authors: Yosr Ghozzi
Abstract:
The Cyber-Physical Systems terminology has been well received by the industrial community and specifically appropriated in educational settings. Indeed, our latest educational activities are based on the development of experimental platforms on an industrial scale. In fact, we built a collaborative learning model following an international market study that led us to place ourselves at the heart of this technology. To align with these findings, a competency-based approach study was conducted, and program content was revised to reflect the project-based approach. Thus, this article deals with the development of educational devices according to a generated curriculum and specific educational activities, while respecting the repository of skills adopted for educational cyber-physical production systems and the laboratories that are compliant and adapted to them. The implementation of these platforms was systematically carried out in the school's workshop spaces. The objective has been twofold, both research and teaching, for the students in mechatronics and logistics of the electromechanical department. We act as trainers and industrial experts to involve students in the implementation of possible extension systems around multidisciplinary projects and reconnect with industrial projects for better professional integration.
Keywords: education 4.0, competency-based learning, teaching factory, project-based learning, cyber-physical systems, industry 4.0
Procedia PDF Downloads 107
10851 Investigation of the Physical Computing in Computational Thinking Practices, Computer Programming Concepts and Self-Efficacy for Crosscutting Ideas in STEM Content Environments
Authors: Sarantos Psycharis
Abstract:
Physical Computing, as an instructional model, is applied in the framework of Engineering Pedagogy to teach “transversal/cross-cutting ideas” in a STEM content approach. LabVIEW and Arduino were used in order to connect the physical world with real data in the framework of the so-called Computational Experiment. Tertiary prospective engineering educators were engaged during their course, and Computational Thinking (CT) concepts were registered before and after the intervention across didactic activities, using validated questionnaires on the relationship between self-efficacy, computer programming, and CT concepts when STEM content epistemology is implemented in alignment with the Computational Pedagogy model. Results show a significant change in students’ responses for self-efficacy for CT before and after the instruction. Results also indicate a significant relation between the responses in the different CT concepts/practices. According to the findings, STEM content epistemology combined with Physical Computing should be a good candidate as a learning and teaching approach in university settings that enhances students’ engagement in CT concepts/practices.
Keywords: Arduino, computational thinking, computer programming, LabVIEW, self-efficacy, STEM
Procedia PDF Downloads 113
10850 Using Deep Learning Neural Networks and Candlestick Chart Representation to Predict Stock Market
Authors: Rosdyana Mangir Irawan Kusuma, Wei-Chun Kao, Ho-Thi Trang, Yu-Yen Ou, Kai-Lung Hua
Abstract:
Stock market prediction is still a challenging problem because there are many factors that affect the stock market price, such as company news and performance, industry performance, investor sentiment, social media sentiment, and economic factors. This work explores the predictability of the stock market using deep convolutional networks and candlestick charts. The outcome is utilized to design a decision support framework that can be used by traders to provide suggested indications of future stock price direction. We perform this work using various types of neural networks, such as the convolutional neural network, residual network, and visual geometry group network. We converted historical stock market data into candlestick charts. Finally, these candlestick charts are fed as input for training a convolutional neural network model. This convolutional neural network model helps us to analyze the patterns inside the candlestick charts and predict the future movements of the stock market. The effectiveness of our method is evaluated in stock market prediction with promising results: 92.2% and 92.1% accuracy for the Taiwanese and Indonesian stock market datasets, respectively.
Keywords: candlestick chart, deep learning, neural network, stock market prediction
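A minimal sketch of this kind of pipeline follows. Instead of rendering true candlestick chart images, it encodes sliding OHLC windows as image-like tensors and trains a small Keras CNN to predict price direction; the architecture, window length, and toy data are assumptions for illustration, not the paper's setup:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def ohlc_windows(prices, lookback=20):
    """Slice an (N, 4) open-high-low-close series into image-like windows.

    A stand-in for rendering real candlestick charts: each window becomes a
    (lookback, 4, 1) tensor; the label is 1 if the next close rises.
    """
    X, y = [], []
    for i in range(len(prices) - lookback - 1):
        w = prices[i:i + lookback]
        X.append((w - w.min()) / (w.max() - w.min() + 1e-9))  # scale to [0, 1]
        y.append(1.0 if prices[i + lookback, 3] > prices[i + lookback - 1, 3] else 0.0)
    return np.array(X)[..., None], np.array(y)

model = tf.keras.Sequential([
    layers.Input(shape=(20, 4, 1)),
    layers.Conv2D(32, (3, 2), activation="relu"),
    layers.Conv2D(64, (3, 2), activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(price goes up)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, (500, 4)), axis=0) + 100  # toy OHLC data
X, y = ohlc_windows(prices)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```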
Procedia PDF Downloads 447
10849 A Fuzzy TOPSIS Based Model for Safety Risk Assessment of Operational Flight Data
Authors: N. Borjalilu, P. Rabiei, A. Enjoo
Abstract:
A Flight Data Monitoring (FDM) program assists an operator in the aviation industry to identify, quantify, assess, and address operational safety risks in order to improve the safety of flight operations. FDM is a powerful tool for an aircraft operator, integrated into the operator’s Safety Management System (SMS), allowing it to detect, confirm, and assess safety issues and to check the effectiveness of corrective actions associated with human errors. This article proposes a model for assessing the safety risk level of flight data, focusing on different event categories, based on fuzzy set values. It permits evaluation of the operational safety level from the point of view of flight activities. The main advantage of this method is the proposed qualitative safety analysis of flight data. This research applies the opinions of aviation experts, gathered through a number of questionnaires related to flight data, in four categories of occurrence that can take place during an accident or an incident: Runway Excursions (RE), Controlled Flight Into Terrain (CFIT), Mid-Air Collision (MAC), and Loss of Control in Flight (LOC-I). By weighting each one (by F-TOPSIS) and applying it to the number of risks of the event, the safety risk of each related event can be obtained.
Keywords: F-TOPSIS, fuzzy set, flight data monitoring (FDM), flight safety
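A minimal sketch of the fuzzy TOPSIS ranking step follows, using triangular fuzzy numbers; the ratings, criteria, and weights are hypothetical placeholders for the aggregated expert questionnaire data:

```python
import numpy as np

# Hypothetical triangular fuzzy ratings (l, m, u) of the four event categories
# against three criteria (e.g., severity, likelihood, detectability);
# the real study aggregates expert questionnaires instead.
ratings = np.array([
    [[5, 7, 9], [7, 9, 10], [3, 5, 7]],   # RE
    [[7, 9, 10], [5, 7, 9], [5, 7, 9]],   # CFIT
    [[3, 5, 7], [7, 9, 10], [7, 9, 10]],  # MAC
    [[7, 9, 10], [7, 9, 10], [5, 7, 9]],  # LOC-I
], dtype=float)
weights = np.array([[0.3, 0.5, 0.7], [0.5, 0.7, 0.9], [0.1, 0.3, 0.5]])

# Normalize by the largest upper bound per criterion, then weight elementwise.
norm = ratings / ratings[:, :, 2].max(axis=0)[None, :, None]
weighted = norm * weights[None, :, :]

def fuzzy_distance(a, b):
    """Vertex distance between two triangular fuzzy numbers."""
    return np.sqrt(np.mean((a - b) ** 2))

fpis, fnis = np.ones(3), np.zeros(3)  # fuzzy positive/negative ideal solutions
d_plus = np.array([sum(fuzzy_distance(weighted[i, j], fpis) for j in range(3))
                   for i in range(4)])
d_minus = np.array([sum(fuzzy_distance(weighted[i, j], fnis) for j in range(3))
                    for i in range(4)])
cc = d_minus / (d_plus + d_minus)  # closeness coefficient: higher = riskier here

for name, c in zip(["RE", "CFIT", "MAC", "LOC-I"], cc):
    print(f"{name}: CC = {c:.3f}")
```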
Procedia PDF Downloads 168
10848 One-Shot Text Classification with Multilingual-BERT
Authors: Hsin-Yang Wang, K. M. A. Salam, Ying-Jia Lin, Daniel Tan, Tzu-Hsuan Chou, Hung-Yu Kao
Abstract:
Detecting user intent from natural language expressions has a wide variety of use cases in different natural language processing applications. Recently, few-shot training has seen a spike in usage in commercial domains. Due to the lack of significant sample features, downstream task performance has been limited or leads to unstable results across different domains. As a state-of-the-art method, the pre-trained BERT model, which gathers sentence-level information from a large text corpus, shows improvement on several NLP benchmarks. In this research, we propose a method to change multi-class classification tasks into binary classification tasks, then use the confidence score to rank the results. As a language model, BERT performs well on sequence data. In our experiment, we change the objective from predicting labels to finding the relations between words in sequence data. Our proposed method achieved 71.0% accuracy on the internal intent detection dataset and 63.9% accuracy on the HuffPost dataset. Acknowledgment: This work was supported by NCKU-B109-K003, which is the collaboration between National Cheng Kung University, Taiwan, and SoftBank Corp., Tokyo.
Keywords: OSML, BERT, text classification, one shot
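A minimal sketch of the binary reformulation follows, using the Hugging Face transformers library: each candidate label is paired with the utterance, scored by a two-class head, and the labels are ranked by confidence. The checkpoint, pairing scheme, and example are assumptions, not the authors' released setup; the head must first be fine-tuned on match/mismatch pairs for the scores to be meaningful:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "bert-base-multilingual-cased"  # assumed; the paper's weights are not public
tok = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)
model.eval()  # assumes prior fine-tuning on (utterance, label) match/mismatch pairs

def match_score(utterance: str, label: str) -> float:
    """P(label fits utterance) from the binary head on a sentence pair."""
    enc = tok(utterance, label, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def classify(utterance: str, labels: list) -> list:
    """Multi-class -> one binary decision per label, ranked by confidence."""
    scores = [(lab, match_score(utterance, lab)) for lab in labels]
    return sorted(scores, key=lambda s: s[1], reverse=True)

print(classify("I forgot my password", ["account recovery", "billing", "shipping"]))
```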
Procedia PDF Downloads 101
10847 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning
Authors: Saahith M. S., Sivakami R.
Abstract:
In the realm of football analytics, particularly focusing on predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, followed by data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis involves delving into feature significance using methodologies like SelectKBest and Recursive Feature Elimination (RFE) to pinpoint pertinent attributes for predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), are explored to develop predictive models. Each model's performance is evaluated using metrics such as Mean Squared Error (MSE) and R-squared to gauge its efficacy in predicting player performance. Furthermore, this investigation encompasses a top player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis entails scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis concentrates on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis evaluates the influence of age on player performance and identifies any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis
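A minimal sketch of the feature-selection and model-comparison loop described above, using scikit-learn with synthetic data standing in for the player attribute dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic stand-in for player attributes (pace, passing, age, ...) -> overall rating.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))
y = X[:, :4] @ np.array([3.0, 2.0, 1.5, 1.0]) + rng.normal(0, 0.5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Recursive Feature Elimination keeps the attributes most predictive of rating.
selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=4).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

models = {
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "Linear Regression": LinearRegression(),
    "SVR": SVR(kernel="rbf", C=10.0),
}
for name, m in models.items():
    pred = m.fit(X_tr_sel, y_tr).predict(X_te_sel)
    print(f"{name}: MSE={mean_squared_error(y_te, pred):.3f}, "
          f"R2={r2_score(y_te, pred):.3f}")
```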
Procedia PDF Downloads 38
10846 Optimization of Thermopile Sensor Performance of Polycrystalline Silicon Film
Authors: Li Long, Thomas Ortlepp
Abstract:
A theoretical model for the optimization of thermopile sensor performance is developed for thermoelectric-based infrared radiation detection. It is shown that the performance of a polycrystalline silicon film thermopile sensor can be optimized according to the thermoelectric quality factor, the sensor layer structure factor, and the sensor layout geometrical form factor. Based on the properties of electrons, phonons, grain boundaries, and their interactions, the thermoelectric quality factor of polycrystalline silicon is analyzed with the relaxation time approximation of the Boltzmann transport equation. The model includes the effects of grain structure, grain boundary trap properties, and doping concentration. The layer structure factor is analyzed with respect to the infrared absorption coefficient. The optimization of the layout design is characterized by the form factor, which is calculated for different sensor designs. A double-layer polycrystalline silicon thermopile infrared sensor on a suspended membrane has been designed and fabricated with a CMOS-compatible process. The theoretical approach is confirmed by measurement results.
Keywords: polycrystalline silicon, relaxation time approximation, specific detectivity, thermal conductivity, thermopile infrared sensor
Procedia PDF Downloads 139
10845 Three-Dimensional, Non-Linear Finite Element Analysis of Bullet Penetration through Thin AISI 4340 Steel Target Plate
Authors: Abhishek Soni, A. Kumaraswamy, M. S. Mahesh
Abstract:
Bullet penetration in a steel plate is investigated with the help of three-dimensional, non-linear, transient, dynamic finite element analysis using the explicit time integration code LS-DYNA. The effects of large strain, strain rate, and temperature in the very high velocity regime were studied from a number of simulations of semi-spherical-nose bullet penetration through a single-layered circular plate of 2 mm thickness at impact velocities of 500, 1000, and 1500 m/s, with the help of the Johnson-Cook material model. The Mie-Gruneisen equation of state is used in conjunction with the Johnson-Cook material model to determine the pressure-volume relationship at various points of interest. Two material models, viz. Plastic-Kinematic and Johnson-Cook, resulted in different deformation patterns in the steel plate. It is observed from the simulation results that the velocity drop and loss of kinetic energy occurred very quickly up to perforation of the plate; after that, the change in velocity and the change in kinetic energy are negligibly small. The physics behind this kind of behaviour is presented in the paper.
Keywords: AISI 4340 steel, ballistic impact simulation, bullet penetration, non-linear FEM
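For reference, the Johnson-Cook material model combines strain hardening, strain-rate hardening, and thermal softening in a single flow stress expression. A minimal sketch follows; the parameter values are commonly cited ones for AISI 4340 steel, not taken from this paper:

```python
import math

def johnson_cook_stress(eps_p, eps_rate, T,
                        A=792.0, B=510.0, n=0.26, C=0.014, m=1.03,
                        eps_rate0=1.0, T_room=294.0, T_melt=1793.0):
    """Johnson-Cook flow stress (MPa):
    sigma = (A + B*eps^n) * (1 + C*ln(rate ratio)) * (1 - T*^m).
    Parameter values are commonly cited for AISI 4340 steel, not from this paper."""
    T_star = (T - T_room) / (T_melt - T_room)
    strain_term = A + B * eps_p**n
    rate_term = 1.0 + C * math.log(max(eps_rate / eps_rate0, 1e-12))
    thermal_term = 1.0 - max(T_star, 0.0)**m
    return strain_term * rate_term * thermal_term

# Flow stress at 5% plastic strain, 1e5 /s strain rate (ballistic regime), 600 K:
print(f"{johnson_cook_stress(0.05, 1e5, 600.0):.1f} MPa")
```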
Procedia PDF Downloads 208
10844 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling
Authors: Aamna Lawrence, Ashutosh Mishra
Abstract:
Tremors occur in 60% of patients who have Multiple Sclerosis (MS), the most common demyelinating disease affecting the central and peripheral nervous system, and are the primary cause of disability in young adults. While pharmacological agents provide minimal benefits, surgical interventions like Deep Brain Stimulation and Thalamotomy are riddled with dangerous complications, which makes non-invasive electrical stimulation an appealing treatment of choice for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly frequency) can be computed by mathematically modeling the nerve fibre to take into consideration the minutest details of the axon morphologies, tremors due to demyelination can be optimally alleviated. In this computational study, we have modeled the random demyelination pattern that typically manifests in MS using the high-density Hodgkin-Huxley model with suitable modifications to account for the myelin. The internode of the nerve fibre in our model could have up to ten demyelinated regions, each having random length and myelin thickness. The arrival time of action potentials traveling the demyelinated and the normally myelinated nerve fibre between two fixed points in space was noted, and its relationship with the nerve fibre radius, ranging from 5 µm to 12 µm, was analyzed. It was interesting to note that there were no overlaps between the arrival times for action potentials traversing the demyelinated and normally myelinated nerve fibres, even when a single internode of the nerve fibre was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application would be a function of the random demyelination pattern, to block only the delayed tremor-causing action potentials. The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness beyond it, thus paving the way for wearable neuro-rehabilitative technologies.
Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor
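A minimal single-compartment Hodgkin-Huxley sketch follows, using the standard squid-axon parameters; the paper's model additionally represents myelinated internodes with randomized demyelinated regions, which this sketch omits. It shows how action potential timing, the quantity compared above, can be read out:

```python
import numpy as np

# Standard Hodgkin-Huxley rate functions (V in mV); the paper's model extends
# this with myelinated internodes and random demyelinated regions.
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3        # uF/cm2, mS/cm2
ENa, EK, EL = 50.0, -77.0, -54.4              # mV

dt, T = 0.01, 50.0                            # ms
steps = int(T / dt)
V, n, m, h = -65.0, 0.317, 0.053, 0.596       # resting state
spike_times = []

for i in range(steps):
    t = i * dt
    I_ext = 10.0 if 5.0 <= t < 6.0 else 0.0   # brief stimulus (uA/cm2)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V_new = V + dt * (I_ext - I_ion) / C
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    if V < 0.0 <= V_new:                      # upward zero crossing = spike
        spike_times.append(t)
    V = V_new

print("Action potential times (ms):", [round(t, 2) for t in spike_times])
```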
Procedia PDF Downloads 128
10843 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction
Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan
Abstract:
Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is increasingly growing. Medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk, so as to reduce the effects of the disease, is very helpful. The present study aimed to collect data related to risk factors of heart infarction from patients’ medical records and to develop predictive models using a data mining algorithm. Methods: The present work was an analytical study conducted on a database containing 350 records. Data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-sectioned data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential in facilitating the management of a patient with a specific disease. Therefore, health interventions or changes in life style can be conducted based on these models to improve the health conditions of the individuals at risk.
Keywords: decision trees, neural network, myocardial infarction, data mining
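For reference, the evaluation measures named above all derive from the confusion matrix. A minimal sketch with hypothetical counts, not the study's actual results:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics used to compare the MI models."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),   # recall / true positive rate
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for a model screening 350 records for MI risk.
for name, value in diagnostic_metrics(tp=120, fp=25, tn=180, fn=25).items():
    print(f"{name}: {value:.3f}")
```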
Procedia PDF Downloads 429
10842 Determination of the Optimum Size of Building Stone Blocks: Case Study of Delichai Travertine Mine
Authors: Hesam Sedaghat Nejad, Navid Hosseini, Arash Nikvar Hassani
Abstract:
Determination of the optimum block size with high profitability is one of the significant parameters in the design of building stone mines. The aim of this study was to determine the optimum dimensions of building stone blocks in the Delichai travertine mine of Damavand in Tehran province by combining the parameters proven effective in determining the optimum dimensions of building stones, such as the spacing of joints and gaps and the constraints of extraction tools, with the help of modeling in Gemcom software. To this end, following simulation of the topography of the mine, the block model was prepared; then, in order to use the spacing of joints and discontinuities as a limiting factor, the existing joint set was added to the model. Since only one almost horizontal joint set with a slope of 5 degrees was available, this factor was effective only in determining the optimum height of the block; thus, to determine the optimum longitudinal and transverse dimensions of the extracted block, the power of the loader available in the mine was considered as the secondary limiting factor. According to the aforementioned factors, the optimal block size in this mine was measured as 3.4×4×7 meters.
Keywords: building stone, optimum block size, Delichai travertine mine, loader power
Procedia PDF Downloads 365
10841 Measuring Output Multipliers of Energy Consumption and Manufacturing Sectors in Malaysia during the Global Financial Crisis
Authors: Hussain Ali Bekhet, Tuan Ab. Rashid Bin Tuan Abdullah, Tahira Yasmin
Abstract:
The strong relationship between energy consumption and economic growth is widely recognised. Most countries’ energy demand declined during the economic downturn known as the Global Financial Crisis (GFC) of 2008–2009. The objective of the current study is to investigate the energy consumption and performance of Malaysia’s manufacturing sectors during the GFC. We applied the output multiplier approach, which is based on the input-output model. Two input-output tables of Malaysia, covering 2005 and 2010, were used. The results indicate significant changes in the output multipliers of the manufacturing sectors between 2005 and 2010. Moreover, the energy-to-manufacturing sectors’ output multipliers also decreased during the GFC due to a decline in export-oriented industries during the crisis. The increasing importance of the manufacturing sector to the development of Malaysian trade resulted in a noticeable decrease in the consumption of each energy sector’s output, especially that of the electricity and gas sector. Based on the research findings, the Malaysian government released several stimulus packages to enhance these sectors’ performance and generally improve the Malaysian economy.
Keywords: global financial crisis, input-output model, manufacturing, output multipliers, energy, Malaysia
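For reference, output multipliers from an input-output table are the column sums of the Leontief inverse. A minimal sketch with an illustrative three-sector coefficient matrix, not Malaysia's actual 2005/2010 tables:

```python
import numpy as np

# Toy 3-sector technical coefficient matrix A (energy, manufacturing, services);
# A[i, j] = input from sector i needed per unit of sector j's output.
# Illustrative values only.
A = np.array([
    [0.10, 0.20, 0.05],
    [0.15, 0.25, 0.10],
    [0.05, 0.10, 0.15],
])

# Leontief inverse L = (I - A)^-1 maps final demand to total output.
L = np.linalg.inv(np.eye(3) - A)

# The output multiplier of sector j is the column sum of L: total output
# generated economy-wide per unit increase in final demand for sector j.
multipliers = L.sum(axis=0)
for sector, mult in zip(["energy", "manufacturing", "services"], multipliers):
    print(f"{sector}: output multiplier = {mult:.3f}")
```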
Procedia PDF Downloads 726
10840 Short-Term Operation Planning for Energy Management of Exhibition Hall
Authors: Yooncheol Lee, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
This paper deals with the establishment of a short-term operational plan for an air conditioner for efficient energy management of an exhibition hall. The short-term operational plan is composed of a time series of operational schedules, which we have searched for using genetic algorithms. Establishing an operational schedule should consider the future trends of the variables affecting the exhibition hall environment. To reflect continuously changing factors such as external temperature and occupancy, short-term operational plans should be updated in real time. But it takes too much time to evaluate a short-term operational plan using EnergyPlus, a building simulation tool. For that reason, it is difficult to update the operational plan in real time. To evaluate short-term operational plans, we designed prediction models based on machine learning with fast evaluation speed. These models, which were created by learning from past operational data, are accurate and fast. The collection of operational data and the verification of operational plans were done using EnergyPlus. Experimental results show that the proposed method can save energy compared to the reactive control method.
Keywords: exhibition hall, energy management, predictive model, simulation-based optimization
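A minimal sketch of the schedule search follows: a genetic algorithm evolves a 24-hour setpoint schedule against a stand-in surrogate cost function (the study instead trains machine learning prediction models on EnergyPlus data). All numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
HOURS, POP, GENS = 24, 40, 60

def surrogate_cost(schedule):
    """Stand-in for the learned prediction model: penalizes energy use
    (low setpoints) and discomfort (deviation from 24 C in opening hours)."""
    energy = np.sum(26.0 - schedule)                  # cooling effort proxy
    comfort = np.sum((schedule[9:18] - 24.0) ** 2)    # 09:00-18:00 occupied
    return energy + 5.0 * comfort

pop = rng.uniform(20.0, 28.0, (POP, HOURS))           # setpoint schedules (C)
for _ in range(GENS):
    costs = np.array([surrogate_cost(s) for s in pop])
    elite = pop[np.argsort(costs)[:POP // 2]]         # selection
    parents = elite[rng.integers(0, len(elite), (POP - len(elite), 2))]
    cut = rng.integers(1, HOURS, POP - len(elite))
    children = np.where(np.arange(HOURS) < cut[:, None],
                        parents[:, 0], parents[:, 1])  # one-point crossover
    children += rng.normal(0, 0.2, children.shape)     # mutation
    pop = np.vstack([elite, np.clip(children, 20.0, 28.0)])

best = pop[np.argmin([surrogate_cost(s) for s in pop])]
print("Best schedule (C):", np.round(best, 1))
```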
Procedia PDF Downloads 339
10839 The Impact of Sustainable Packaging on Customers’ Willingness to Buy: A Study Based in Rwanda
Authors: Nirere Martine
Abstract:
Purpose – This study aims to understand the intention of customers to adopt sustainable packaging and the impact of sustainable packaging on customers’ willingness to buy a product using sustainable packaging. Design/methodology/approach – A new research model based on the technology acceptance model (TAM) and structural equation modeling are used to examine causality and test relationships based on the data collected from 251 Rwandan samples. Findings – The findings indicated that perceived ease of use positively affects perceived usefulness. Furthermore, perceived usefulness and perceived ease of use positively affect the intention to adopt sustainable packaging. However, perceived risk and perceived cost negatively affect the intention to adopt sustainable packaging. The intention to adopt sustainable packaging positively affects the willingness to buy a product using sustainable packaging. Originality/value – Many researchers have investigated consumers’ behavior in purchasing a product. In particular, they have examined whether customers are willing to pay extra for product packaging. There has been no study that has examined the impact of sustainable packaging on customers’ willingness to buy. The results of this study can help manufacturers form a better understanding of customers’ willingness to purchase a product using sustainable packaging.
Keywords: consumers’ behavior, sustainable packaging, TAM, Rwanda
Procedia PDF Downloads 196
10838 Modelling Dengue Disease With Climate Variables Using Geospatial Data For Mekong River Delta Region of Vietnam
Authors: Thi Thanh Nga Pham, Damien Philippon, Alexis Drogoul, Thi Thu Thuy Nguyen, Tien Cong Nguyen
Abstract:
The Mekong River Delta region of Vietnam is recognized as one of the regions most vulnerable to climate change, due to flooding and seawater rise, and therefore faces an increased burden of climate change-related diseases. Changes in temperature and precipitation are likely to alter the incidence and distribution of vector-borne diseases such as dengue fever. In this region, the peak of the dengue epidemic period is around July to September, during the rainy season. It is believed that climate is an important factor in dengue transmission. This study aims to enhance the capacity of dengue prediction through the relationship of dengue incidence with climate and environmental variables for the Mekong River Delta of Vietnam during 2005-2015. Mathematical models for vector-host infectious disease, including larvae, mosquitoes, and human beings, were used to calculate the impacts of climate on dengue transmission, incorporating geospatial data for model input. Monthly dengue incidence data were collected at the provincial level. Precipitation data were extracted from satellite observations of GSMaP (Global Satellite Mapping of Precipitation), while land surface temperature and land cover data were from MODIS. The value of the seasonal reproduction number was estimated to evaluate the potential, severity, and persistence of dengue infection, while the final infected number was derived to check for dengue outbreaks. The results show that dengue infection depends on the seasonal variation of climate variables, with the peak during the rainy season, and predicted dengue incidence follows this dynamic well for the whole studied region. However, the large 2007 outbreak was not captured by the model, reflecting nonlinear dependences of transmission on climate. Other possible effects will be discussed to address the limitations of the model. This suggests the need to consider both climate variables and other variability across temporal and spatial scales.
Keywords: infectious disease, dengue, geospatial data, climate
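A minimal sketch of a vector-host dengue model with a seasonally forced biting rate follows; the structure is the standard Ross-Macdonald form, and all parameter values are illustrative assumptions rather than the paper's fitted values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters, not the paper's fitted values.
Nh, Nv = 1_000_000, 2_000_000      # host / vector population
beta_h, beta_v = 0.4, 0.4          # transmission probabilities per bite
gamma, mu_v = 1 / 7, 1 / 14        # human recovery, mosquito mortality (1/day)
b0, eps = 0.5, 0.4                 # mean biting rate and seasonal amplitude

def biting_rate(t):
    """Seasonally forced biting rate peaking in the rainy season."""
    return b0 * (1 + eps * np.sin(2 * np.pi * t / 365))

def model(t, state):
    Sh, Ih, Rh, Sv, Iv = state
    b = biting_rate(t)
    new_h = b * beta_h * Sh * Iv / Nh    # new human infections
    new_v = b * beta_v * Sv * Ih / Nh    # new mosquito infections
    return [-new_h,
            new_h - gamma * Ih,
            gamma * Ih,
            mu_v * Nv - new_v - mu_v * Sv,
            new_v - mu_v * Iv]

y0 = [Nh - 10, 10, 0, Nv, 0]
sol = solve_ivp(model, (0, 730), y0, dense_output=True, max_step=1.0)

# Seasonal reproduction number in the standard Ross-Macdonald form.
t = np.linspace(0, 730, 731)
R_t = np.sqrt(biting_rate(t) ** 2 * beta_h * beta_v * Nv / (Nh * gamma * mu_v))
print(f"Peak infected humans: {sol.sol(t)[1].max():.0f}, max R(t): {R_t.max():.2f}")
```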
Procedia PDF Downloads 383
10837 Model of Learning Center on OTOP Production Process Based on Sufficiency Economic Philosophy
Authors: Chutikarn Sriviboon, Witthaya Mekhum
Abstract:
The purposes of this research were to analyze and evaluate the success factors in the OTOP production process for the development of a learning center on the OTOP production process based on the Sufficiency Economic Philosophy for sustainable life quality. The research was designed as a qualitative study to gather information from 30 OTOP producers in Bangkontee District, Samudsongkram Province. They were all interviewed on 3 main parts. Part 1 was about the production process, including 1) production, 2) product development, 3) community strength, 4) marketing possibility, and 5) product quality. Part 2 evaluated appropriate success factors, including 1) the analysis of the success factors, 2) evaluation of the strategy based on the Sufficiency Economic Philosophy, and 3) the model of a learning center on the OTOP production process based on the Sufficiency Economic Philosophy for sustainable life quality. The results showed that the production did not affect the environment and had the potential to continue standard-quality production. The producers used raw materials from within the country. On the aspect of product and community strength in the past year, it was found that there was no appropriate packaging showing product identity according to global market standards. The producers needed training on packaging, especially for food and drink products. On the aspect of product quality and product specification, it was found that the products were certified by the local OTOP standard. There should be a responsible organization to help the uncertified producers pass the standard. However, there was a problem of food contamination, which was hazardous to consumers. The producers should cooperate with the government sector or educational institutes involved in food processing to reach the FDA standard. The results from the small group discussion showed that the community expected higher education and a better standard of living. Some problems reported by the community included informal debt and drugs in the community. There were 8 steps in developing the model of a learning center on the OTOP production process based on the Sufficiency Economic Philosophy for sustainable life quality.
Keywords: production process, OTOP, sufficiency economic philosophy, learning center
Procedia PDF Downloads 376
10836 Influence of Fermentation Conditions on Humic Acids Production by Trichoderma viride Using an Oil Palm Empty Fruit Bunch as the Substrate
Authors: F. L. Motta, M. H. A. Santana
Abstract:
Humic acids (HA) were produced by a Trichoderma viride strain under submerged fermentation in a medium based on the oil palm empty fruit bunch (EFB), and the main variables of the process were optimized by using response surface methodology. A temperature of 40°C and concentrations of 50 g/L EFB, 5.7 g/L potato peptone, and 0.11 g/L (NH4)2SO4 were the optimum levels of the variables that maximize HA production, within the physicochemical and biological limits of the process. The optimized conditions led to an experimental HA concentration of 428.4±17.5 mg/L, which validated the prediction from the statistical model of 412.0 mg/L. This optimization increased the HA production previously reported in the literature about 7-fold. Additionally, the time profiles of HA production and fungal growth confirmed our previous findings that HA production preferentially occurs during fungal sporulation. The present study demonstrated that T. viride successfully produced HA via the submerged fermentation of EFB, and the process parameters were successfully optimized using a statistics-based response surface model. To the best of our knowledge, the present work is the first report on the optimization of HA production from EFB by a biotechnological process, whose feasibility was only pointed out in previous works.
Keywords: empty fruit bunch, humic acids, submerged fermentation, Trichoderma viride
Procedia PDF Downloads 306
10835 A Conv-Long Short-Term Memory Deep Learning Model for Traffic Flow Prediction
Authors: Ali Reza Sattarzadeh, Ronny J. Kutadinata, Pubudu N. Pathirana, Van Thanh Huynh
Abstract:
Traffic congestion has become a severe worldwide problem, affecting everyday life, fuel consumption, time, and air pollution. The primary causes of these issues are inadequate transportation infrastructure, poor traffic signal management, and rising population. Traffic flow forecasting is one of the essential and effective methods in urban congestion and traffic management, and it has attracted the attention of researchers. With the development of technology, undeniable progress has been achieved by existing methods. However, there is room for improvement in the extraction of temporal and spatial features and in determining the importance of traffic flow sequences. In the proposed model, we implement convolutional neural network (CNN) and long short-term memory (LSTM) deep learning models for mining nonlinear correlations, and demonstrate their effectiveness in increasing the accuracy of traffic flow prediction on a real dataset. According to the experiments, the results indicate that implementing Conv-LSTM networks increases the productivity and accuracy of deep learning models for traffic flow prediction.
Keywords: deep learning algorithms, intelligent transportation systems, spatiotemporal features, traffic flow prediction
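A minimal sketch of the Conv-LSTM idea follows: a 1D convolution extracts local patterns from a traffic flow window, and an LSTM layer models temporal dependencies before a dense output predicts the next flow value. The architecture and synthetic data are assumptions for illustration, not the paper's configuration:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def make_windows(flow, lookback=12):
    """Turn a univariate flow series (e.g., vehicles per 5 min) into supervised pairs."""
    X = np.stack([flow[i:i + lookback] for i in range(len(flow) - lookback)])
    y = flow[lookback:]
    return X[..., None], y

# Synthetic daily-periodic traffic flow as a stand-in for the real dataset.
t = np.arange(5000)
flow = 200 + 80 * np.sin(2 * np.pi * t / 288) + np.random.default_rng(0).normal(0, 10, t.size)
X, y = make_windows(flow)

model = tf.keras.Sequential([
    layers.Input(shape=(12, 1)),
    layers.Conv1D(32, 3, padding="causal", activation="relu"),  # local patterns
    layers.LSTM(64),                                            # temporal memory
    layers.Dense(1),                                            # next flow value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print("Predicted next flow:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```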
Procedia PDF Downloads 171
10834 The Effects of Separating Inferior Alveolar Neurovascular Bundles on Osteogenesis of Tissue-Engineered Bone and Vascularization
Authors: Lin Feng, E. Lingling, Hongchen Liu
Abstract:
In order to evaluate the effects of autologous blood vessels and nerves on vascularization, a dog model of tissue-engineered bone vascularization was established by constructing inferior alveolar neurovascular bundles through the mandibular canal. Sixteen 12-month-old healthy beagles were randomly divided into two groups (n=8). Group A retained the inferior alveolar neurovascular bundles, and Group B retained the inferior alveolar nerves. Bone marrow mesenchymal stem cells were injected into β-tricalcium phosphate to prepare an internal tissue-engineered bone scaffold. A personalized titanium mesh was then prepared by rapid prototyping and fixed by an external titanium scaffold. Two dogs in each group were sacrificed on the 30th, 45th, 60th, and 90th postoperative days, respectively. The bone was visually examined, scanned by CT, and subjected to HE staining, immunohistochemical staining, vascular casting, and PCR to detect the changes in osteogenesis and vascularization. The two groups had similar outcomes in regard to osteogenesis and vascularization (P>0.05); both showed remarkable regenerative capacities. The model of tissue-engineered bone vascularization is potentially applicable in clinical practice to allow satisfactory osteogenesis and vascularization.
Keywords: inferior alveolar neurovascular bundle, osteogenesis, tissue-engineered bone, vascularization
Procedia PDF Downloads 390
10833 Horizontal and Vertical Illuminance Correlations in a Case Study for Shaded South Facing Surfaces
Authors: S. Matour, M. Mahdavinejad, R. Fayaz
Abstract:
Daylight utilization is a key factor in achieving visual and thermal comfort, and energy savings in integrated building design. However, the lack of measured data related to this topic has become a major challenge with the increasing need for integrating lighting concepts and simulations in the early stages of design procedures. The current paper deals with the values of daylight illuminance on horizontal and south-facing vertical surfaces; the data are estimated using the IESNA model and measured values of the horizontal and vertical illuminance, and a regression model with an acceptable linear correlation is obtained. The resultant illuminance frequency curves are useful for estimating daylight availability on south-facing surfaces in Tehran. In addition, the relationship between indirect vertical illuminance and the corresponding global horizontal illuminance is analyzed. A simple parametric equation is proposed in order to predict the vertical illuminance on a shaded south-facing surface. The equation correlates the ratio between the vertical and horizontal illuminance to the solar altitude and is used with another relationship for prediction of the vertical illuminance. Both equations show good agreement, which allows for calculation of indirect vertical illuminance on a south-facing surface at any time throughout the year.
Keywords: Tehran daylight availability, horizontal illuminance, vertical illuminance, diffuse illuminance
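A minimal sketch of the proposed parametric approach follows: fit the ratio of vertical to horizontal illuminance against solar altitude, then predict vertical illuminance from a measured horizontal value. The synthetic data and linear form are assumptions for illustration, not the paper's fitted coefficients:

```python
import numpy as np

# Synthetic stand-in for measured data: ratio of vertical to horizontal
# illuminance as a function of solar altitude (degrees).
rng = np.random.default_rng(7)
altitude = rng.uniform(10, 70, 200)
ratio_true = 0.9 - 0.008 * altitude                 # assumed linear trend
ratio_meas = ratio_true + rng.normal(0, 0.03, 200)  # measurement noise

# Fit the simple parametric equation ratio(alt) = a * alt + b.
a, b = np.polyfit(altitude, ratio_meas, 1)
print(f"ratio(alt) = {a:.4f} * alt + {b:.3f}")

def vertical_illuminance(E_horizontal, alt_deg):
    """Predict vertical illuminance on the shaded south-facing surface
    from the global horizontal illuminance and solar altitude."""
    return (a * alt_deg + b) * E_horizontal

print(f"Ev at 40 deg, Eh = 20000 lx: {vertical_illuminance(20000, 40):.0f} lx")
```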
Procedia PDF Downloads 205
10832 An Evidence Map of Cost-Utility Studies in Non-Small Cell Lung Cancer
Authors: Cassandra Springate, Alexandra Furber, Jack E. Hines
Abstract:
Objectives: To create an evidence map of the available cost-utility studies with non-small cell lung cancer patients, and to identify the geographical settings and interventions used. Methods: Using the Disease, Study Type, and Model Type filters in heoro.com, we identified all cost-utility studies published between 1960 and 2017 with patients with non-small cell lung cancer. These papers were then indexed according to pre-specified categories. Results: Heoro.com identified 89 independent publications, published between 1995 and 2017. Of the 89 papers, 74 were published since 2010, 28 were from the USA, and 35 were from Europe, 16 of which were from the UK. Other publications were from China and Japan (13), Canada (9), Australia and New Zealand (4), and other countries (8). Fifty-nine studies included a chemotherapy intervention, of which 23 included erlotinib or gefitinib, and 21 included pemetrexed or docetaxel; others included nivolumab (3), pembrolizumab (2), crizotinib (2), denosumab (2), necitumumab (1), and bevacizumab (1). Also, 19 studies modeled screening, staging, or surveillance strategies. Conclusions: The cost-utility studies found for NSCLC most commonly looked at the effectiveness of different chemotherapy treatments, with some also evaluating the addition of screening strategies. Most were also conducted with patient data from the USA and Europe.
Keywords: cancer, cost-utility, economic model, non-small cell lung cancer
Procedia PDF Downloads 150
10831 Modeling of Carbon Monoxide Distribution under the Sky-Train Stations
Authors: Suranath Chomcheon, Nathnarong Khajohnsaksumeth, Benchawan Wiwatanapataphee
Abstract:
Carbon monoxide is one of the harmful gases, being colorless, odorless, and tasteless. Too much carbon monoxide taken into the human body reduces oxygen transport within human body cells, leading to many symptoms including headache, nausea, vomiting, loss of consciousness, and death. Carbon monoxide is considered one of the air pollution indicators. It is mainly released, along with soot, from the exhaust pipe by the incomplete combustion of the vehicle engine. Nowadays, the increase in vehicle usage and the slow movement of vehicles stuck in traffic jams have created a large amount of carbon monoxide, which accumulates in the street canyon area. In this research, we study the effect of parameters such as wind speed and the aspect ratio of building height on ventilation. We consider a model of the pollutant under the Bangkok Transit System (BTS) stations in a two-dimensional geometrical domain. The convection-diffusion equation and the Reynolds-averaged Navier-Stokes equations are used to describe the concentration and the turbulent flow of carbon monoxide. The finite element method is applied to obtain the numerical result. The result shows that our model can describe the dispersion patterns of carbon monoxide for different wind speeds.
Keywords: air pollution, carbon monoxide, finite element, street canyon
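A simplified stand-in for the solver is sketched below: a steady 1D advection-diffusion equation discretized with upwind finite differences (the paper uses a 2D finite element model with RANS turbulence), showing how stronger wind lowers the peak CO concentration. All values are illustrative assumptions:

```python
import numpy as np

def co_profile(wind_speed, n=200, length=100.0, D=2.0, source=5e-4):
    """Steady 1D advection-diffusion stand-in for the 2D FEM model:
    u dc/dx = D d2c/dx2 + S, with c = 0 at both ends (clean air).
    Upwind differencing keeps the scheme stable for strong wind."""
    dx = length / (n - 1)
    A = np.zeros((n, n))
    rhs = np.full(n, -source)
    for i in range(1, n - 1):
        A[i, i - 1] = D / dx**2 + wind_speed / dx   # upwind convection
        A[i, i] = -2 * D / dx**2 - wind_speed / dx
        A[i, i + 1] = D / dx**2
    A[0, 0] = A[-1, -1] = 1.0   # Dirichlet boundaries
    rhs[0] = rhs[-1] = 0.0
    return np.linalg.solve(A, rhs)

for u in (0.5, 2.0, 5.0):   # wind speed in m/s
    print(f"u = {u} m/s -> peak CO concentration {co_profile(u).max():.2e}")
```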
Procedia PDF Downloads 126
10830 Efficiency of Background Chlorine Residuals against Accidental Microbial Episode in Proto-Type Distribution Network (Rig) Using Central Composite Design (CCD)
Authors: Sajida Rasheed, Imran Hashmi, Luiza Campos, Qizhi Zhou, Kim Keu
Abstract:
A quadratic model (p < 0.0001) was developed using a central composite design of 50 experimental runs (42 non-center + 8 center points) to assess the efficiency of background chlorine residuals in combating an accidental microbial episode in a prototype distribution network (DN) (rig). A known amount of background chlorine residuals was maintained in the DN, and a required number of bacteria (Escherichia coli K-12 strain) were introduced through an injection port into the pipe loop system. Samples were taken at various time intervals at different pipe lengths. Spread plate counting was performed to determine bacterial numbers. The model developed was significant, with microbial concentration and time (p < 0.0001), pipe length (p < 0.022), background chlorine residuals (p < 0.07), and time^2 (p < 0.09) as significant factors. The ramp function of the variables shows that at a microbial count of 10^6, a flow rate of 0.76 L/min, and a pipe length of 133 meters, a background residual chlorine of 0.16 mg/L was enough for complete inactivation of the microbial episode in approximately 18 minutes.
Keywords: central composite design (CCD), distribution network, Escherichia coli, residual chlorine
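A minimal sketch of the approach follows: generate a central composite design in coded units and fit the full quadratic model by least squares. For brevity, it uses two factors and a synthetic response, whereas the actual study used 50 runs (42 non-center + 8 center points):

```python
import itertools
import numpy as np

# Two-factor central composite design in coded units: 4 factorial points,
# 4 axial points at +/- alpha, and replicated center points.
alpha = np.sqrt(2)
factorial = np.array(list(itertools.product([-1, 1], repeat=2)), float)
axial = np.array([[alpha, 0], [-alpha, 0], [0, alpha], [0, -alpha]])
center = np.zeros((5, 2))
X = np.vstack([factorial, axial, center])

# Synthetic response (e.g., log bacterial inactivation) with noise.
rng = np.random.default_rng(3)
x1, x2 = X[:, 0], X[:, 1]
y = 4 + 1.2 * x1 - 0.8 * x2 - 0.5 * x1**2 + 0.3 * x1 * x2 + rng.normal(0, 0.1, len(X))

# Fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2 by least squares.
design = np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
names = ["b0", "b1", "b2", "b11", "b22", "b12"]
print({n: round(float(c), 3) for n, c in zip(names, coef)})
```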
Procedia PDF Downloads 462
10829 Analytical Modelling of Surface Roughness during Compacted Graphite Iron Milling Using Ceramic Inserts
Authors: Ş. Karabulut, A. Güllü, A. Güldaş, R. Gürbüz
Abstract:
This study investigates the effects of the lead angle and chip thickness variation on surface roughness during the machining of compacted graphite iron using ceramic cutting tools under dry cutting conditions. Analytical models were developed for predicting the surface roughness values of the specimens after the face milling process. Experimental data were collected and imported into the artificial neural network model. A multilayer perceptron model was used with the back-propagation algorithm, employing the input parameters of lead angle, cutting speed, and feed rate in connection with chip thickness. Furthermore, analysis of variance was employed to determine the effects of the cutting parameters on surface roughness. The artificial neural network and regression analysis were used to predict surface roughness. The values thus predicted were compared with the collected experimental data, and the corresponding percentage error was computed. The analysis results revealed that the lead angle is the dominant factor affecting surface roughness. Experimental results indicated an improvement in the surface roughness value with decreasing lead angle value from 88° to 45°.
Keywords: CGI, milling, surface roughness, ANN, regression, modeling, analysis
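A minimal sketch of the multilayer perceptron modeling step, using scikit-learn with synthetic data; the input ranges mirror those in the abstract, and the assumed trend (roughness improving as lead angle decreases) is only for illustration:

```python
import numpy as np
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: [lead angle (deg), cutting speed (m/min), feed (mm/tooth)]
# -> surface roughness Ra (um).
rng = np.random.default_rng(5)
X = np.column_stack([rng.uniform(45, 88, 300),      # lead angle
                     rng.uniform(100, 300, 300),    # cutting speed
                     rng.uniform(0.05, 0.25, 300)]) # feed rate
Ra = 0.02 * X[:, 0] + 0.001 * X[:, 1] + 8.0 * X[:, 2] + rng.normal(0, 0.1, 300)

# Multilayer perceptron trained with backpropagation, as in the abstract.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000,
                                   random_state=0))
model.fit(X[:240], Ra[:240])
pred = model.predict(X[240:])
print(f"MAPE: {mean_absolute_percentage_error(Ra[240:], pred) * 100:.2f} %")
```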
Procedia PDF Downloads 448