Search results for: distributed type-2 fuzzy algorithm

3279 GSM Based Smart Patient Monitoring System

Authors: Ayman M. Mansour

Abstract:

In this paper, we propose an intelligent system for monitoring the health conditions of patients. Monitoring patient health is a complex problem that involves different medical units and requires continuous monitoring, especially in rural areas, because of the inadequate number of available specialized physicians. The proposed system will improve patient care and drive costs down compared with the existing system in Jordan, and it will be the starting point for faster and better communication between different units in the Jordanian health system. Connecting patients and their physicians beyond hospital doors, regardless of their geographical area, is an important issue in developing the health system in Jordan. The proposed system will generate an initial diagnosis of the patient's case, which will assist and advise clinicians at the point of care. The decision is based on demographic data and laboratory test results of the patient. With such a system, capable of making medical decisions, the quality of medical care in Jordan, and specifically in Tafila, is expected to improve. This will provide more accurate, effective, and reliable diagnoses and treatments, especially when physicians have insufficient knowledge.

Keywords: GSM, SMS, patient, monitoring system, fuzzy logic, multi-agent system

Procedia PDF Downloads 558
3278 Fabrication of Immune-Affinity Monolithic Array for Detection of α-Fetoprotein and Carcinoembryonic Antigen

Authors: Li Li, Li-Ru Xia, He-Ye Wang, Xiao-Dong Bi

Abstract:

In this paper, we present a highly sensitive immune-affinity monolithic array for the detection of α-fetoprotein (AFP) and carcinoembryonic antigen (CEA). First, epoxy-functionalized monolith arrays were fabricated using a UV-initiated copolymerization method. Scanning electron microscopy (SEM) images showed that the poly(BABEA-co-GMA) monolith exhibited a well-controlled skeletal and well-distributed porous structure. Then, AFP and CEA immune-affinity monolithic arrays were prepared by immobilizing AFP and CEA antibodies on the epoxy-functionalized monolith arrays. With a non-competitive immune response format, the presented AFP and CEA immune-affinity arrays were demonstrated to be an inexpensive, flexible, homogeneous and stable array for the detection of AFP and CEA.

Keywords: chemiluminescent detection, immune-affinity, monolithic copolymer array, UV-initiated copolymerization

Procedia PDF Downloads 331
3277 Using Nonhomogeneous Poisson Process with Compound Distribution to Price Catastrophe Options

Authors: Rong-Tsorng Wang

Abstract:

In this paper, we derive a pricing formula for catastrophe equity put options (CatEPut) with non-homogeneous loss and approximated compound distributions. We assume that the loss claims arrival process is a nonhomogeneous Poisson process (NHPP) representing the clustering occurrences of loss claims, that the sizes of loss claims form a sequence of independent and identically distributed random variables, and that the accumulated loss follows a compound distribution approximated by a heavy-tailed distribution. A numerical example is given to calibrate parameters, and we discuss how the value of the CatEPut is affected by changes in the parameters of the pricing model.
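
As an illustration of this loss model (not the closed-form formula derived in the paper), the sketch below simulates the compound NHPP by thinning and estimates a CatEPut payoff by Monte Carlo; the intensity function, the lognormal (heavy-tailed) claim sizes, and the equity-impact parameter alpha are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def nhpp_arrival_times(intensity, t_max, lam_max):
    """Simulate NHPP arrival times on [0, t_max] by thinning a
    homogeneous Poisson process whose rate lam_max bounds intensity(t)."""
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            return np.array(times)
        if rng.uniform() < intensity(t) / lam_max:
            times.append(t)

# Seasonal claim-arrival intensity (assumed form, for illustration only).
intensity = lambda t: 5.0 + 4.0 * np.sin(2.0 * np.pi * t) ** 2
T, n_paths = 1.0, 5_000
K, S0, alpha = 80.0, 100.0, 0.02  # strike, initial share price, loss impact (assumed)

payoffs = []
for _ in range(n_paths):
    arrivals = nhpp_arrival_times(intensity, T, lam_max=9.0)
    # i.i.d. heavy-tailed claim sizes -> the accumulated loss is a compound NHPP.
    loss = rng.lognormal(mean=1.0, sigma=1.2, size=arrivals.size).sum()
    S_T = S0 * np.exp(-alpha * loss)  # equity depressed by the accumulated loss
    payoffs.append(max(K - S_T, 0.0))

print(f"Monte Carlo CatEPut estimate: {np.mean(payoffs):.3f}")
```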

Keywords: catastrophe equity put options, compound distributions, nonhomogeneous Poisson process, pricing model

Procedia PDF Downloads 163
3276 Ethereum Based Smart Contracts for Trade and Finance

Authors: Rishabh Garg

Abstract:

Traditionally, business parties build trust through a centralized operating mechanism, such as payment by letter of credit. However, the increase in cyber-attacks and malicious hacking has jeopardized business operations and finance practices. Emerging markets, owing to their higher banking risks and greater presence of digital financing, are looking toward technology-driven solutions, financial inclusion and innovative working paradigms. Blockchain has the potential to enhance transaction transparency and supply chain traceability, and it has already captured a vast landscape, with 200 million crypto users worldwide. Fintech and blockchain products are popping up across brokerage, digital wallets, exchanges, post-trade clearance, settlement, middleware, infrastructure, and base protocols.

Keywords: blockchain, distributed ledger technology, decentralized applications, ethereum, smart contracts, trade finance

Procedia PDF Downloads 145
3275 Inclusive Cities Decision Matrix Based on a Multidimensional Approach for Sustainable Smart Cities

Authors: Madhurima S. Waghmare, Shaleen Singhal

Abstract:

The concepts of smartness, inclusion, and sustainability are multidisciplinary and fuzzy, rooted in economic and social development theories and policies that are reflected in the spatial development of cities. It is a challenge to convert these concepts from aspirations into transformative actions. There is a dearth of assessment and planning tools to support city planners and administrators in developing smart, inclusive, and sustainable cities. To address this gap, this study develops an inclusive-cities decision matrix based on an exploratory approach and using mixed methods. The matrix is soundly based on a review of multidisciplinary urban-sector literature and is refined and finalized based on inputs from experts and insights from case studies. The application of the decision matrix to the case-study cities in India suggests that contemporary planning tools for cities need to be multidisciplinary and flexible to respond to the unique needs of diverse contexts. The paper suggests that a multidimensional and inclusive approach to city planning can play an important role in building sustainable smart cities.

Keywords: inclusive-cities decision matrix, smart cities in India, city planning tools, sustainable cities

Procedia PDF Downloads 152
3274 A Set of Microsatellite Markers for Population Genetics of Copper-Winged Bat (Myotis rufoniger) Using Saliva DNA

Authors: Junghwa An, Sungkyoung Choi, Eun Ye, San Hoon Han, Young-Gun Choi, Chul Oun Jung

Abstract:

The copper-winged bat (Myotis rufoniger) is a widely distributed, medium-sized bat in Asia, including Korea. Its population has been decreasing because of habitat loss. This study reports the isolation and characterization of ten polymorphic microsatellite loci in the endangered M. rufoniger. For the genetic analyses, we used saliva DNA collected from bats during their winter sleep period. The number of alleles per locus ranged from 2 to 9, and the observed and expected heterozygosities ranged from 0.063 to 0.750 and from 0.063 to 0.865, respectively. The average polymorphic information content (PIC) of these markers was 0.37. Two loci of M. rufoniger showed departure from Hardy-Weinberg equilibrium (HWE). This demonstrates that the ten microsatellite loci can be used as genetic markers for further investigation of the copper-winged bat.
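
For reference, the two marker statistics reported above can be computed from allele frequencies with the standard formulas; the sketch below uses hypothetical frequencies, not the paper's data.

```python
import numpy as np

def expected_heterozygosity(p):
    """He = 1 - sum(p_i^2) for allele frequencies p."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def pic(p):
    """Polymorphic information content (Botstein et al., 1980):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    p = np.asarray(p, dtype=float)
    het = 1.0 - np.sum(p ** 2)
    cross = sum(2.0 * p[i] ** 2 * p[j] ** 2
                for i in range(len(p)) for j in range(i + 1, len(p)))
    return het - cross

# Hypothetical allele frequencies for one locus (illustrative only).
freqs = [0.50, 0.30, 0.15, 0.05]
print(f"He  = {expected_heterozygosity(freqs):.3f}")
print(f"PIC = {pic(freqs):.3f}")
```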

Keywords: copper-winged bat, microsatellite, population genetics, South Korea

Procedia PDF Downloads 367
3273 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar

Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo

Abstract:

The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data is derived from physical and statistical models, while ground data is collected by solar radiation measurement stations. The ground data is of high quality; however, it is limited to discrete point locations, with high installation and maintenance costs for the ground stations. On the other hand, satellite solar radiation data is continuous and available across geographical locations, but it is relatively less accurate than ground data. To exploit the advantages of both, a product has been developed here that provides spatial continuity and higher accuracy than either data source alone. The popular satellite database, the National Solar Radiation Database (NSRDB; PSM V3 model, spatial resolution 4 km), is chosen here for merging with ground-measured solar radiation in Qatar. The spatial distribution of ground measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for the error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% were observed for October, November, and December 2019, respectively, when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS (ESRI), whose workflow combines regression and kriging. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points, and a semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriged residuals were then added to the NSRDB values predicted by the regression model to obtain the final predicted values. The NRMSE values obtained after merging are 1.84%, 1.28%, and 1.81% for October, November, and December 2019, respectively. One more explanatory variable, the ground elevation, has been incorporated in the regression and kriging steps to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has proven to be highly accurate. An additional method is also proposed to generate calibrated maps using the regression and kriging model, and further to use the calibrated model to generate solar radiation maps from the explanatory variables alone when not enough historical ground data is available for long-term analysis. The NRMSE values obtained after comparing the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively.
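
A minimal numpy sketch of this regression-plus-residual-interpolation merge is given below. Inverse-distance weighting stands in for the kriging step, and all station data are synthetic; this illustrates the workflow only, not the EBK implementation used in the paper.

```python
import numpy as np

def nrmse(pred, obs):
    """Normalized RMSE as a percentage of the mean observed GHI."""
    return 100.0 * np.sqrt(np.mean((pred - obs) ** 2)) / np.mean(obs)

# Hypothetical monthly-mean GHI at 13 ground stations (kWh/m2/day).
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(13, 2))                 # station coordinates (km)
sat = rng.uniform(4.5, 6.5, size=13)                   # satellite (NSRDB-like) values
ground = 0.95 * sat + 0.2 + rng.normal(0, 0.05, 13)    # ground measurements

# Step 1: OLS regression between ground and satellite values.
b, a = np.polyfit(sat, ground, 1)                      # slope, intercept
residuals = ground - (a + b * sat)

# Step 2: interpolate the residuals spatially. Inverse-distance weighting
# is a simple stand-in here for the kriging step of the EBK workflow.
def idw(xy_known, v_known, xy_query, power=2.0):
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * v_known) / np.sum(w)

# Step 3: merged prediction = regression trend + interpolated residual.
def merged_ghi(sat_value, query_xy):
    return a + b * sat_value + idw(xy, residuals, query_xy)

print("NRMSE, regression trend only: %.2f%%" % nrmse(a + b * sat, ground))
print("merged GHI at (50, 50):", merged_ghi(5.5, np.array([50.0, 50.0])))
```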

Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB

Procedia PDF Downloads 84
3272 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis

Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan

Abstract:

Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated on social media sites every day, creating numerous opportunities to bring more insights to decision-makers. The integration of Big Data technology into the tourism industry allows companies to learn where their customers have been and what they like. This information can then be used by businesses such as those managing visitor centers or hotels, and tourists can get a clear idea of places before visiting. On the technical side, natural language is processed by analysing the sentiment features of online reviews from tourists, and we supply an enhanced long short-term memory (LSTM) framework for sentiment feature extraction of travel reviews. For experimental validation, we constructed a web review database using a crawler and web scraping techniques to evaluate the effectiveness of our methodology. The text of the sentences was first classified through the VADER and RoBERTa models to obtain the polarity of the reviews. In this paper, we study feature extraction methods such as count vectorization and TF-IDF vectorization, and implement a Convolutional Neural Network (CNN) classifier for the sentiment analysis to decide whether the tourist's attitude towards a destination is positive, negative, or simply neutral, based on the review text posted online. The results demonstrate that, after pre-processing and cleaning the dataset, the CNN algorithm achieved an accuracy of 96.12% for the positive and negative sentiment analysis.
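
The two feature-extraction methods named above are available off the shelf in scikit-learn; a minimal sketch with made-up reviews (not the scraped corpus) follows.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Hypothetical reviews standing in for the crawled tourism corpus.
reviews = [
    "The old town was beautiful and the guides were friendly",
    "Hotel room was dirty and the staff were rude",
    "Great food, great views, would visit again",
]

count_vec = CountVectorizer()   # raw term counts
tfidf_vec = TfidfVectorizer()   # counts reweighted by inverse document frequency

X_counts = count_vec.fit_transform(reviews)
X_tfidf = tfidf_vec.fit_transform(reviews)

print(X_counts.shape, X_tfidf.shape)          # (3, vocabulary size)
print(tfidf_vec.get_feature_names_out()[:5])  # first few learned terms
```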

Keywords: count vectorization, convolutional neural network, crawler, data technology, long short-term memory, web scraping, sentiment analysis

Procedia PDF Downloads 81
3271 Performance Analysis of Permanent Magnet Synchronous Motor Using Direct Torque Control Based ANFIS Controller for Electric Vehicle

Authors: Marulasiddappa H. B., Pushparajesh Viswanathan

Abstract:

The use of internal combustion engines (ICEs) is declining day by day because of pollution and limited fuel availability. In the present scenario, the electric vehicle (EV) is taking the place of the ICE vehicle. The performance of EVs can be improved by the proper selection of electric motors. Initially, EVs preferred induction motors for traction purposes, but due to the complexity of controlling induction motors, the permanent magnet synchronous motor (PMSM) is replacing the induction motor in EVs thanks to its advantages. Direct torque control (DTC) is one of the known techniques for PMSM drives in EVs to control torque and speed. However, the presence of torque ripple is the main drawback of this technique, and many control strategies have been proposed to reduce torque ripple in PMSMs. In this paper, an adaptive neuro-fuzzy inference system (ANFIS) controller is proposed to reduce torque ripple and settling time. Performance parameters such as torque, speed and settling time are compared between a conventional proportional-integral (PI) controller and the ANFIS controller.

Keywords: direct torque control, electric vehicle, torque ripple, PMSM

Procedia PDF Downloads 158
3270 EFL Learners’ Perceptions in Using Online Tools in Developing Writing Skills

Authors: Zhikal Qadir Salih, Hanife Bensen

Abstract:

As modern technology continues to make a towering impact on everything, its relevance permeates all spheres; language learning in general, and writing skills in particular, are no exception. This study aimed at finding out how EFL learners perceive online tools for improving their writing skills. The study was carried out at Tishk University. Copies of the questionnaire were distributed to the participants in order to elicit their perceptions, and the collected data were subjected to descriptive and inferential statistics. The outcome revealed that the participants have positive perceptions about using online tools to enhance their writing skills. The study, however, found that neither the gender nor the class level of the participants makes any significant difference in their perceptions about the use of online tools, as far as writing skill is concerned. Based on these outcomes, relevant recommendations were made.

Keywords: online tools, writing skills, EFL learners, language learning

Procedia PDF Downloads 98
3269 Software Defined Storage: Object Storage over Hadoop Platform

Authors: Amritesh Srivastava, Gaurav Sharma

Abstract:

The purpose of this project is to develop an open-source object storage system that is highly durable, scalable and reliable. Two representative storage systems in cloud computing are Google's GFS and Amazon's S3, which provide high reliability, performance and stability. Our proposed system is highly inspired by Amazon S3, and we use the Hadoop Distributed File System (HDFS) Java API to implement it. We propose the architecture of an object storage system based on Hadoop, discuss the requirements of our system, what we expect from it, and what problems we may encounter, and give a detailed design proposal along with abstract source code to implement it. The final goal of the system is to provide REST-based access to our object storage system, which exists on top of HDFS.
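
The paper's implementation uses the HDFS Java API; purely as an illustration of the REST-over-HDFS idea, here is a minimal Python sketch assuming pyarrow's HadoopFileSystem bindings and a Flask front end (the host, port and paths are hypothetical, not the paper's implementation).

```python
from flask import Flask, request, Response
from pyarrow import fs

app = Flask(__name__)
hdfs = fs.HadoopFileSystem("namenode-host", 8020)  # assumed cluster address

@app.route("/objects/<path:key>", methods=["PUT"])
def put_object(key):
    # Each object is stored as a single HDFS file under /objects/.
    with hdfs.open_output_stream(f"/objects/{key}") as out:
        out.write(request.get_data())
    return ("", 201)

@app.route("/objects/<path:key>", methods=["GET"])
def get_object(key):
    with hdfs.open_input_stream(f"/objects/{key}") as src:
        return Response(src.read(), mimetype="application/octet-stream")

if __name__ == "__main__":
    app.run(port=8080)
```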

Keywords: Hadoop, HBase, object storage, REST

Procedia PDF Downloads 333
3268 The Role of Self-Confidence, Adversity Quotient, and Self-Efficacy Critical Thinking: Path Model

Authors: Bayu Dwi Cahyo, Ekohariadi, Theodorus Wiyanto Wibowo, I. G. P. Asto Budithahjanto, Eppy Yundra

Abstract:

The objective of this study is to examine the effects of self-confidence, adversity quotient, and self-efficacy on critical thinking. The participants were 137 cadets of the Aviation Polytechnic of Surabaya, selected by purposive sampling. Data were collected using a Likert-scale questionnaire distributed to the specified number of respondents. SPSS AMOS v23 was used to test a number of a priori multivariate growth curve models and to examine relationships between the variables via path analysis. The fit of the path model was (χ² = 88.463, df = 71, χ²/df = 1.246, GFI = .914, CFI = .988, P = .079, AGFI = .873, TLI = .985, RMSEA = .043). According to the analysis, there is a positive and significant relationship between self-confidence, adversity quotient, and self-efficacy on the one hand and critical thinking on the other.

Keywords: self-confidence, adversity quotient, self-efficacy variables, critical thinking

Procedia PDF Downloads 141
3267 Probabilistic Graphical Model for the Web

Authors: M. Nekri, A. Khelladi

Abstract:

The World Wide Web is a network with a complex topology whose main properties are a power-law degree distribution, a low clustering coefficient, and a short average distance. Modeling the web as a graph makes it possible to locate information in little time and consequently offers help in the construction of search engines. Here, we present a model based on already existing probabilistic graphs with all the aforesaid characteristics. This work consists in studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
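
For a concrete example of a probabilistic graph with these properties, a preferential-attachment (Barabási-Albert) graph can be generated and measured with networkx; the size and attachment parameters below are assumed for illustration only.

```python
import networkx as nx
import numpy as np

# Preferential attachment yields the power-law degree distribution
# named in the keywords; n and m are illustrative parameters.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

degrees = np.array([d for _, d in G.degree()])
print("average clustering coefficient:", nx.average_clustering(G))
print("max degree (heavy tail):", degrees.max())
print("mean degree:", degrees.mean())
```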

Keywords: clustering coefficient, preferential attachment, small world, web community

Procedia PDF Downloads 269
3266 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics

Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez

Abstract:

In recent years, Mexico's population has seen a rise in negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, which have an important impact on a person's physical, mental and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person's emotional state, can be applied in an effort to decrease these negative effects. Using different non-invasive physiological sensors, such as EEG, luminosity and face recognition, we gather information on the subject's current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed in order to filter only the groups of information that relate to the subject's emotions and the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level and noise. Finally, a neural-network-based control algorithm is given the data obtained in order to provide feedback to the system and automate the modification of the environment variables and the audiovisual content shown, so that these changes can positively alter the subject's emotional state. During the research, it was found that light color was directly related to the type of impact generated by the audiovisual content on the subject's emotional state: red illumination increased the impact of violent images, and green illumination, along with relaxing images, decreased the subject's level of anxiety. Specific differences were found between men and women as to which types of images generated a greater impact in either gender. The population sample was mainly constituted of college students, whose data analysis showed a decreased sensibility to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us better insight into the possibilities of emotional domotics and the applications that can be created to improve performance in people's lives. The objective of this research is to create a positive impact through the application of technology to everyday activities; nonetheless, an ethical problem arises, since this can also be applied to control a person's emotions and shift their decision-making.

Keywords: data analysis, emotional domotics, performance improvement, neural network

Procedia PDF Downloads 137
3265 Ground Water Monitoring Using High-Resolution Fiber Optics Cable Sensors (FOCS)

Authors: Sayed Isahaq Hossain, K. T. Chang, Moustapha Ndour

Abstract:

Inference of the phreatic line through earth dams is of paramount importance because it can be directly associated with piping phenomena that may lead to dam failure. Normally in the field, instruments such as 'divers' and 'standpipes' are used to identify seepage conditions, but these provide only point data with a fair amount of interpolation or assumption. In this paper, we employ high-resolution fiber optic cable sensors (FOCS) based on Raman scattering in order to obtain a very accurate phreatic line and seepage profile. Unlike the above-mentioned devices, which pinpoint the water level at a location, this kind of distributed fiber optic sensing gives us more reliable information due to its inherent characteristic of continuous measurement.

Keywords: standpipe, diver, FOCS, monitoring, Raman scattering

Procedia PDF Downloads 347
3264 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest, and any higher resolution is lost in this resampling. When the topographic features are computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point; the number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance: any doubling of the window size in each direction takes only a single pass over the data, so the resulting algorithm scales logarithmically as a function of window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope; the relevant length scale is taken to be half the window size at which the minimum variance was achieved. The resulting process was evaluated on 1-meter DEM data and on artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within each region of the image. These benefits are gained without additional computational cost compared with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally; the result is of higher resolution and less affected by noise than existing techniques.
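
A condensed numpy sketch of the additive-sums idea follows: per-pixel sufficient statistics for a plane fit are aggregated over 2x2 blocks at each iteration, so each doubling of the window costs one pass. Non-overlapping windows and the synthetic ramp DEM are simplifications of the paper's scheme, which also selects, per location, the scale of minimum variance.

```python
import numpy as np

def window_sums(dem, res=1.0):
    """Per-pixel sufficient statistics for the plane fit z = a + b*x + c*y."""
    h, w = dem.shape
    y, x = np.mgrid[0:h, 0:w] * res
    z = dem
    comps = [np.ones_like(z), x, y, z, x*x, y*y, x*y, x*z, y*z, z*z]
    return np.stack(comps, axis=-1)  # shape (h, w, 10)

def aggregate(s):
    """Merge 2x2 non-overlapping windows by adding their sums (one pass)."""
    return s[0::2, 0::2] + s[1::2, 0::2] + s[0::2, 1::2] + s[1::2, 1::2]

def fit(s):
    """Solve the plane regression from sums; return slope magnitude and
    residual variance per window (normal equations, RSS = Szz - beta'X'z)."""
    n, sx, sy, sz, sxx, syy, sxy, sxz, syz, szz = np.moveaxis(s, -1, 0)
    A = np.stack([np.stack([n, sx, sy], -1),
                  np.stack([sx, sxx, sxy], -1),
                  np.stack([sy, sxy, syy], -1)], -2)
    rhs = np.stack([sz, sxz, syz], -1)[..., None]
    beta = np.linalg.solve(A, rhs)[..., 0]
    rss = szz - (beta * np.stack([sz, sxz, syz], -1)).sum(-1)
    return np.hypot(beta[..., 1], beta[..., 2]), rss / n

# Hypothetical 64x64 DEM: a ramp (true slope 0.1) plus noise.
rng = np.random.default_rng(2)
dem = np.add.outer(np.arange(64) * 0.1, np.zeros(64)) + rng.normal(0, 0.05, (64, 64))
s = window_sums(dem)
for level in range(1, 6):  # window sizes 2x2 .. 32x32
    s = aggregate(s)
    slope, var = fit(s)
    print(f"window {2**level}x{2**level}: mean slope {slope.mean():.3f}, "
          f"mean residual variance {var.mean():.5f}")
```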

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 123
3263 Green Synthesis of Silver Nanoparticles Using Echinacea Flower Extract and Characterization

Authors: Masood Hussain, Erol Pehlivan, Ahmet Avci, Ecem Guder

Abstract:

Green synthesis of silver nanoparticles (AgNPs) was carried out using echinacea flower extract as a reducing/protecting agent. The effects of various operating parameters, such as stirring rate, temperature, pH of the solution, amount of extract, and concentration of silver nitrate, were optimized in order to achieve monodispersed, spherical, small echinacea-protected silver nanoparticles (echinacea-AgNPs) through this biosynthetic method. The surface roughness and topography of the synthesized nanoparticles were confirmed using Atomic Force Microscopy (AFM), and High-Resolution Transmission Electron Microscopy (HRTEM) revealed the formation of uniformly distributed echinacea-AgNPs with an average size of 30.2 ± 2 nm.

Keywords: Echinacea flower extract, green synthesis, silver nanoparticles, morphology

Procedia PDF Downloads 416
3262 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study deals with using multiplicative weight update within artificial intelligence and machine learning to create models that can diagnose skin cancer from microscopic images of cancer samples. The multiplicative weight update method is used to combine the predictions of multiple models in an attempt to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the multiplicative weight update system. These models are trained on pictures of skin cancer from the ISIC Archive to learn patterns for labeling unseen scans as either benign or malignant. The models feed a multiplicative weight update algorithm that takes into account the precision and accuracy of each model on each successive guess in order to weight its predictions, and the weighted guesses are then combined to obtain the final predictions. The research hypothesis stated that there would be a significant difference in accuracy between the three models and the multiplicative weight update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%; the multiplicative weight update algorithm achieved an accuracy of 72.27%. The conclusion drawn was that there was indeed a significant difference, and that a CNN model would be a better option for this problem than a multiplicative weight update system. This may be because multiplicative weight update is less effective in a binary setting with only two possible classifications; in a categorical setting with multiple classes and groupings, a multiplicative weight update system might become more proficient, as it draws on the strengths of multiple models to classify images into many categories rather than only two. This experimentation and computer science project can help create better algorithms and models for the future of artificial intelligence in the medical imaging field.
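
A minimal sketch of the weighted-majority form of multiplicative weight update is shown below; the simulated expert accuracies match those reported above, but the data, the penalty rate eta, and the update-rule details are illustrative assumptions.

```python
import numpy as np

def mwu_ensemble(predictions, labels, eta=0.5):
    """Multiplicative weight update over expert predictions.
    predictions: (n_models, n_samples) array of 0/1 guesses.
    Experts that guess wrong on a sample are penalized by (1 - eta)."""
    n_models, n_samples = predictions.shape
    w = np.ones(n_models)
    combined = np.empty(n_samples, dtype=int)
    for t in range(n_samples):
        # Weighted majority vote of the current experts.
        vote = np.dot(w, predictions[:, t]) / w.sum()
        combined[t] = int(vote >= 0.5)
        # Penalize the experts that were wrong on this sample.
        wrong = predictions[:, t] != labels[t]
        w[wrong] *= (1.0 - eta)
    return combined, w

# Hypothetical benign(0)/malignant(1) labels, with three simulated experts
# whose accuracies mirror the SVMC, CNN and logistic-regression results.
rng = np.random.default_rng(3)
labels = rng.integers(0, 2, 200)
preds = np.stack([np.where(rng.random(200) < acc, labels, 1 - labels)
                  for acc in (0.78, 0.85, 0.79)])
combined, weights = mwu_ensemble(preds, labels)
print("ensemble accuracy:", (combined == labels).mean(), "final weights:", weights)
```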

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 73
3261 Analysis of Simply Supported Beams Using Elastic Beam Theory

Authors: M. K. Dce

Abstract:

The aim of this paper is to investigate the behavior of simply supported beams of rectangular section subjected to a uniformly distributed load (UDL). In this study, five beams of spans 5 m, 6 m, 7 m and 8 m have been considered. The width of all the beams is 400 mm, and the span-to-depth ratio has been taken as 12. The superimposed live load has been increased from 10 kN/m to 25 kN/m at intervals of 5 kN/m. The analysis of the beams has been carried out using elastic beam theory. On the basis of the present study, it is concluded that the maximum bending moment as well as the maximum deflection occurs at the mid-span of a simply supported beam, and that their magnitudes increase in proportion to the magnitude of the UDL. Moreover, the study confirms that the maximum moment is proportional to the square of the span and the maximum deflection is proportional to the fourth power of the span.
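
For reference, the standard elastic-beam-theory results behind these proportionalities, for a simply supported span L under a UDL w with flexural rigidity EI, are:

```latex
M_{\max} = \frac{wL^{2}}{8}, \qquad
\delta_{\max} = \frac{5\,wL^{4}}{384\,EI},
```

both occurring at mid-span, in agreement with the trends reported above.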

Keywords: beam, UDL, bending moment, deflection, elastic beam theory

Procedia PDF Downloads 385
3260 An Evaluative Approach for Successful Implementation of Lean and Green Manufacturing in Indian SMEs

Authors: Satya S. N. Narayana, P. Parthiban, T. Niranjan, N. Kannan

Abstract:

Enterprises adopt methodologies to increase their business performance and to stay competitive in the volatile global market. Lean manufacturing is one such manufacturing paradigm, focusing on cost reduction through the elimination of wastes, or non-value-added activities. With increased awareness of social responsibility and the necessity of meeting the terms of environmental policy, green manufacturing is becoming increasingly important for industries. Large plants, which have more resources, have started implementing lean and green practices and are getting good results, whereas small and medium scale enterprises (SMEs) are facing problems in implementing the lean and green concept. This paper aims to identify the key issues for implementing the lean and green concept in Indian SMEs. The key factors, identified from a literature review and expert opinions, are grouped into different levels by Modified Interpretive Structural Modeling (MISM) to explore their relative importance for implementing lean and green manufacturing. Finally, the Fuzzy Analytic Network Process (FANP) method has been used to determine the extent to which the main principles of lean and green manufacturing have been carried out in six Indian medium-scale manufacturing industries.

Keywords: lean manufacturing, green manufacturing, MISM, FANP

Procedia PDF Downloads 532
3259 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training

Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto

Abstract:

In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer and network technology, to civil engineering and construction sites is accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and to use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is a type of work that has been carried out by skilled workers based on years of experience, and it is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures designed to protect coasts, beaches, and so on from erosion by reducing the energy of ocean waves. They usually weigh more than 1 t and are installed by being suspended from a crane, so training inexperienced workers on-site would be time-consuming and costly. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined here as the ratio of the total volume of the wave-dissipating blocks inside the structure to the volume of the final shape of the ideal structure, and uses this evaluation to determine how well the user is able to install the blocks. The voxelization technique is used to calculate the porosity of the structure, simplifying the calculations, and other techniques, such as raycasting and box overlapping, are employed for accurate simulation; a sketch of the voxel computation is given below. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization and compare the user-demonstrated block installation with the appropriate installation found by the algorithm.
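
A hedged sketch of that voxel-based measure: both the placed blocks and the ideal target envelope are rasterized into boolean voxel grids, and the score is the occupied fraction. The simulator itself is written for Unity (C#); the Python below, the axis-aligned boxes standing in for tetrapod-shaped blocks, and the grid size are all illustrative assumptions.

```python
import numpy as np

GRID = (40, 40, 40)  # voxel grid dimensions (x, y, z), assumed

def rasterize_box(grid, origin, size):
    """Mark the voxels covered by an axis-aligned box (a stand-in for a
    tetrapod-shaped wave-dissipating block)."""
    x, y, z = origin
    dx, dy, dz = size
    grid[x:x+dx, y:y+dy, z:z+dz] = True

ideal = np.zeros(GRID, dtype=bool)   # envelope of the ideal structure
rasterize_box(ideal, (5, 5, 0), (30, 30, 20))

placed = np.zeros(GRID, dtype=bool)  # blocks placed by the trainee
for origin in [(5, 5, 0), (15, 5, 0), (25, 5, 0), (5, 15, 0)]:
    rasterize_box(placed, origin, (10, 10, 10))

inside = placed & ideal              # count only volume inside the envelope
score = inside.sum() / ideal.sum()   # block volume / ideal structure volume
print(f"fill ratio used as the porosity score: {score:.2f}")
```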

Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks

Procedia PDF Downloads 90
3258 Uncovering the Relationship between EFL Students' Self-Concept and Their Willingness to Communicate in Language Classes

Authors: Seyedeh Khadijeh Amirian, Seyed Mohammad Reza Amirian, Narges Hekmati

Abstract:

The current study examines the relationship between English as a foreign language (EFL) students' self-concept and their willingness to communicate (WTC) in EFL classes. To this end, two questionnaires, namely 'Willingness to Communicate' (MacIntyre et al., 2001) and the 'Self-Concept Scale' (Liu and Wang, 2005), were distributed among 174 (45 male and 129 female) Iranian EFL university students. Correlation and regression analyses were conducted to examine the relationship between the two variables. The results indicated a significantly positive correlation between EFL students' self-concept and their WTC in EFL classes (p < .05). Moreover, regression analysis indicated that self-concept has a significantly positive influence on students' WTC in language classes (B = .302, p < .05), explaining 30.2% of the variance in the dependent variable (WTC). The results are discussed with regard to individual differences in educational contexts, and implications are offered.

Keywords: EFL students, language classes, willingness to communicate, self-concept

Procedia PDF Downloads 119
3257 Zoonotic Dirofilaria Repens: Geographic Spread and New Avenues for Control

Authors: Francesco La Torre, Angela Di Cesare, Donato Traversa

Abstract:

The mosquito-transmitted nematode Dirofilaria repens is the causative agent of subcutaneous filariosis in dogs, other animals and humans. Adults and circulating microfilariae may cause different forms of skin conditions and various allergic reactions. The infection is present in several countries and is spreading in several areas of Europe. The control of D. repens is pivotal to reduce transmission in dogs and to minimize the risk of infection in humans, but only limited information is available on the chemoprevention of subcutaneous filariosis in dogs. A recent clinical field study showed the efficacy and safety of a monthly administration of an oral formulation containing milbemycin oxime (Milbemax®, Novartis Animal Health) in the chemoprevention of D. repens infection in dogs. The most recent and focused insights into the epidemiology and control of zoonotic canine subcutaneous filariosis are discussed here.

Keywords: Dirofilaria repens, epidemiology, zoonosis, control

Procedia PDF Downloads 736
3256 Fuzzy Data, Random Drift, and a Theoretical Model for the Sequential Emergence of Religious Capacity in Genus Homo

Authors: Margaret Boone Rappaport, Christopher J. Corbally

Abstract:

The ancient ape ancestral population from which living great ape and human species evolved had demographic features affecting their evolution. The population was large, had great genetic variability, and natural selection was effective at honing adaptations. The emerging populations of chimpanzees and humans were affected more by founder effects and genetic drift because they were smaller. Natural selection did not disappear, but it was not as strong. Consequences of the 'population crash' and the human effective population size are introduced briefly. The history of the ancient apes is written in the genomes of living humans and great apes. The expansion of the brain began before the human line emerged. Coalescence times for some genes are very old – up to several million years, long before Homo sapiens. The mismatch between gene trees and species trees highlights the anthropoid speciation processes, and gives the human genome history a fuzzy, probabilistic quality. However, it suggests traits that might form a foundation for capacities emerging later. A theoretical model is presented in which the genomes of early ape populations provide the substructure for the emergence of religious capacity later on the human line. The model does not search for religion, but its foundations. It suggests a course by which an evolutionary line that began with prosimians eventually produced a human species with biologically based religious capacity. The model of the sequential emergence of religious capacity relies on cognitive science, neuroscience, paleoneurology, primate field studies, cognitive archaeology, genomics, and population genetics. And, it emphasizes five trait types: (1) Documented, positive selection of sensory capabilities on the human line may have favored survival, but also eventually enriched human religious experience. (2) The bonobo model suggests a possible down-regulation of aggression and increase in tolerance while feeding, as well as paedomorphism – but, in a human species that remains cognitively sharp (unlike the bonobo). The two species emerged from the same ancient ape population, so it is logical to search for shared traits. (3) An up-regulation of emotional sensitivity and compassion seems to have occurred on the human line. This finds support in modern genetic studies. (4) The authors’ published model of morality's emergence in Homo erectus encompasses a cognitively based, decision-making capacity that was hypothetically overtaken, in part, by religious capacity. Together, they produced a strong, variable, biocultural capability to support human sociability. (5) The full flowering of human religious capacity came with the parietal expansion and smaller face (klinorhynchy) found only in Homo sapiens. Details from paleoneurology suggest the stage was set for human theologies. Larger parietal lobes allowed humans to imagine inner spaces, processes, and beings, and, with the frontal lobe, led to the first theologies composed of structured and integrated theories of the relationships between humans and the supernatural. The model leads to the evolution of a small population of African hominins that was ready to emerge with religious capacity when the species Homo sapiens evolved two hundred thousand years ago. By 50-60,000 years ago, when human ancestors left Africa, they were fully enabled.

Keywords: genetic drift, genomics, parietal expansion, religious capacity

Procedia PDF Downloads 334
3255 Identifying the Factors affecting on the Success of Energy Usage Saving in Municipality of Tehran

Authors: Rojin Bana Derakhshan, Abbas Toloie

Abstract:

For the purpose of optimizing and developing energy efficiency in buildings, it is necessary to recognize the key elements of success in the optimization of energy consumption before performing any actions. Principal component analysis, one of the most valuable results of linear algebra, offers a simple, non-parametric method for extracting relevant information from confusing data sets. An energy management system was implemented according to the international energy management standard ISO 50001:2011, and all energy parameters in the building were measured through an energy audit. In this study, data mining is used to determine the key elements influencing energy saving in buildings. The approach is based on statistical data mining techniques using feature selection and fuzzy logic, converting the data from a massive to a compressed form used to enrich the selected features. In addition, the percentage contribution of each energy-consuming element to energy dissipation is recognized as a separate norm, using the results obtained from the energy audit after measuring all energy-consuming parameters and identifying the variables. Accordingly, the energy-saving solutions are divided into three categories: low-, medium-, and high-expense solutions.

Keywords: energy saving, key elements of success, optimization of energy consumption, data mining

Procedia PDF Downloads 463
3254 Modeling and Simulation of the Tripod Gait of a Hexapod Robot

Authors: El Hansali Hasnaa, Bennani Mohammed

Abstract:

The missions of hexapod legged robots, particularly in irregular and dangerous areas, require high stability and high precision. In this paper, we consider a legged robot with a rectangular body architecture and six legs distributed symmetrically along its two sides, each leg having three degrees of freedom for greater mobility. The aim of this work is to plan the tripod gait trajectory, based on computing the kinematic model to determine the joint variables in the lifting and propelling phases. For this, appropriate coordinate frames are attached to the body and legs in order to obtain a clear representation and efficient generation of the system equations. A simulation in the MATLAB software platform is developed to confirm the kinematic model and the various trajectories of the tripod gait adopted by the hexapod robot in its locomotion.
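
As an illustration of the kind of inverse kinematics involved (the paper's own model is implemented in MATLAB), here is a hedged Python sketch for one 3-DOF leg with a coxa-femur-tibia layout; the link lengths, frames and test point are assumptions, not the paper's values.

```python
import numpy as np

L1, L2, L3 = 0.05, 0.10, 0.15  # coxa, femur, tibia lengths (m), assumed

def leg_ik(x, y, z):
    """Joint angles (rad) that place the foot at (x, y, z) in the coxa
    frame, with z pointing up; standard geometric solution."""
    theta1 = np.arctan2(y, x)              # coxa yaw
    r = np.hypot(x, y) - L1                # horizontal reach past the coxa
    d = np.hypot(r, z)                     # femur-joint-to-foot distance
    # Law of cosines for the femur-tibia triangle.
    cos_knee = (L2**2 + L3**2 - d**2) / (2 * L2 * L3)
    theta3 = np.pi - np.arccos(np.clip(cos_knee, -1, 1))  # tibia (knee)
    alpha = np.arctan2(z, r)
    cos_fem = (L2**2 + d**2 - L3**2) / (2 * L2 * d)
    theta2 = alpha + np.arccos(np.clip(cos_fem, -1, 1))   # femur pitch
    return theta1, theta2, theta3

# One foot position from the lifting phase of a tripod-gait trajectory.
print([round(np.degrees(a), 1) for a in leg_ik(0.12, 0.03, -0.08)])
```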

Keywords: hexapod legged robot, inverse kinematic model, simulation in MATLAB, tripod gait

Procedia PDF Downloads 273
3253 Analysis of a IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart's electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged, which can further complicate visual diagnosis and substantially delay disease detection. In this context, deep learning methods have emerged as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis; they facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, cardiology is one of the fields that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals, and its performance is evaluated on ECG signals of different origins and features to test the model's ability to generalize. Performance of the model for the detection of R-peaks in clean and noisy ECGs is presented: the model is able to detect R-peaks in the presence of various types of noise and on data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 175
3252 Firm's Growth Leading Dimensions of Blockchain Empowered Information Management System: An Empirical Study

Authors: Umang Varshney, Amit Karamchandani, Rohit Kapoor

Abstract:

Practitioners and researchers have realized that blockchain is not limited to currency: as a distributed ledger, blockchain can ensure a transparent and traceable supply chain, and thanks to blockchain-enabled IoT, a firm's information management system can now take inputs from other supply chain partners in real time. This study aims to provide empirical evidence of the dimensions responsible for the growth of firms that have implemented blockchain, and to highlight how the sector (manufacturing or service), the state's regulatory environment, and the choice of blockchain network affect the blockchain's usefulness. This post-adoption study seeks to validate the findings of pre-adoption studies of blockchain. Data will be collected through a survey of managers working in blockchain-implementing firms and analyzed through PLS-SEM.

Keywords: blockchain, information management system, PLS-SEM, firm's growth

Procedia PDF Downloads 112
3251 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity

Authors: Yuri Laevsky, Tatyana Nosova

Abstract:

The phenomenon of filtration gas combustion (FGC) was discovered experimentally at the beginning of the 1980s. It has a number of important applications in areas such as chemical technologies, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation is focused on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation, and its computation encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, and the mass conservation law for the relative concentration of the reacting component of the gas mixture. The homogenization of the model is performed using the two-temperature approach, in which at each point of the continuous medium we specify solid and gas phases with Newtonian heat exchange between them. The construction of the computational scheme is based on the principles of the mixed finite element method on a regular mesh, with an explicit-implicit difference scheme for the approximation in time. Special attention was given to determining the combustion front propagation velocity: direct computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term 'front propagation velocity' makes sense for settled motion, when certain analytical formulae linking velocity and equilibrium temperature are valid. The numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm has been applied in a subsequent numerical investigation of the FGC process, studying the dependence of the main characteristics of the process on various physical parameters. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity has been investigated. It has also been reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow combustion front propagation to rapid propagation, and approximate boundaries of this interval have been calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave allows a semi-Lagrangian approach to the solution of the problem to be considered.

Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation

Procedia PDF Downloads 296
3250 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach

Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao

Abstract:

Nowadays, last-mile distribution plays an increasingly important role in the industrial delivery chain and accounts for a large proportion of total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Because customer demand is discrete and heterogeneous in both its needs and its spatial distribution, leading to higher delivery failure rates and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. Their introduction has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. A key issue for logistics companies is therefore how to design or redesign their last-mile distribution networks as integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, with corresponding transportation vehicles, account for the constraints associated with self-pickup points and lockers, and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that minimizes the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we provide a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results, and we suggest helpful managerial insights for courier companies.
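
To make the MILP structure concrete, here is a deliberately stripped-down PuLP sketch of the facility-location core of such a model, with opening and assignment costs only; routing, vehicle types and refrigerated demand are omitted, and all data values are invented, so this illustrates the formulation style rather than the paper's model.

```python
import pulp

facilities = ["F1", "F2", "F3"]
customers = ["C1", "C2", "C3", "C4"]
open_cost = {"F1": 100, "F2": 120, "F3": 90}            # hypothetical costs
trans = {("F1","C1"): 4, ("F1","C2"): 6, ("F1","C3"): 9, ("F1","C4"): 5,
         ("F2","C1"): 7, ("F2","C2"): 3, ("F2","C3"): 4, ("F2","C4"): 8,
         ("F3","C1"): 6, ("F3","C2"): 8, ("F3","C3"): 5, ("F3","C4"): 3}

prob = pulp.LpProblem("lrp_core", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", facilities, cat="Binary")
x = pulp.LpVariable.dicts("assign", trans.keys(), cat="Binary")

# Objective: facility opening cost plus transport (assignment) cost.
prob += pulp.lpSum(open_cost[f] * y[f] for f in facilities) + \
        pulp.lpSum(trans[k] * x[k] for k in trans)

for c in customers:            # every customer is served exactly once
    prob += pulp.lpSum(x[(f, c)] for f in facilities) == 1
for f, c in trans:             # customers may only use open facilities
    prob += x[(f, c)] <= y[f]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("opened facilities:", [f for f in facilities if y[f].value() == 1])
```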

Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search

Procedia PDF Downloads 71