Search results for: fuzzy entropy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1018

298 Removal of Tartrazine Dye From Aqueous Solutions by Adsorption on the Surface of Polyaniline/Iron Oxide Composite

Authors: Salem Ali Jebreil

Abstract:

In this work, a polyaniline/iron oxide (PANI/Fe2O3) composite was chemically prepared by oxidative polymerization of aniline in an acid medium, in the presence of ammonium persulphate as an oxidant and an amount of Fe2O3. The composite was characterized by scanning electron microscopy (SEM). The prepared composite has been used as an adsorbent to remove Tartrazine dye from aqueous solutions. The effects of initial dye concentration and temperature on the adsorption capacity of PANI/Fe2O3 for Tartrazine dye have been studied in this paper. The Langmuir and Freundlich adsorption models have been used for the mathematical description of the adsorption equilibrium data. The best fit is obtained using the Freundlich isotherm, with an R2 value of 0.998. The changes in Gibbs energy, enthalpy, and entropy of adsorption have also been evaluated for the adsorption of Tartrazine onto PANI/Fe2O3. According to the results, the adsorption process is endothermic in nature.
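As a sketch of the isotherm analysis described above: the Freundlich model is usually fitted in its linearised form, ln qe = ln KF + (1/n) ln Ce, by ordinary least squares. The equilibrium data below are invented for illustration; only the model form comes from the abstract.

```python
import math

def fit_freundlich(ce, qe):
    """Fit ln(qe) = ln(KF) + (1/n) ln(Ce) by ordinary least squares.
    Returns (KF, n, R^2) of the linearised fit."""
    x = [math.log(c) for c in ce]
    y = [math.log(q) for q in qe]
    m = len(x)
    xbar, ybar = sum(x) / m, sum(y) / m
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
            / sum((xi - xbar) ** 2 for xi in x)
    intercept = ybar - slope * xbar
    yhat = [intercept + slope * xi for xi in x]
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return math.exp(intercept), 1.0 / slope, 1.0 - ss_res / ss_tot

# Hypothetical equilibrium data (mg/L, mg/g) following qe = 2 * Ce^0.5:
ce = [1.0, 4.0, 9.0, 16.0, 25.0]
qe = [2.0 * c ** 0.5 for c in ce]
KF, n_f, r2 = fit_freundlich(ce, qe)
```

An R2 near 1 on real data, as reported above (0.998), is what justifies preferring the Freundlich form over the Langmuir one.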

Keywords: adsorption, composite, dye, polyaniline, tartrazine

Procedia PDF Downloads 255
297 Exploring Management of the Fuzzy Front End of Innovation in a Product Driven Startup Company

Authors: Dmitry K. Shaytan, Georgy D. Laptev

Abstract:

In our research we aimed to test a managerial approach for the fuzzy front end (FFE) of innovation by creating a controlled experiment/business case in a breakthrough innovation development. The experiment was in the sport industry and covered all aspects of the customer discovery stage, from ideation to prototyping, followed by a patent application. In the paper we describe and analyze the milestones, tasks, management challenges, and decisions made to create the breakthrough innovation, and evaluate the overall managerial efficiency at the considered FFE stage. We set the managerial outcome of the FFE stage as a valid product concept in hand. We introduce the hypothetical construct “Q-factor”, which helps us in the experiment to distinguish the quality of FFE outcomes. The experiment simulated the FFE of innovation for an entrepreneur and made him responsible for the outcome of a valid product concept. While developing the managerial approach to reach this outcome, we decided to view the product concept from the cognitive psychology and cognitive science point of view. This view helped us to develop the profile of a person whose projection (mental representation) of a new product could optimize FFE activities for a manager or entrepreneur. In the experiment this profile was tested to develop a breakthrough innovation for swimmers. Following the managerial approach, a product concept was created to help swimmers feel/sense the water. A working prototype was developed to estimate the product concept's validity and its value-added effect for customers. Feedback from coaches and swimmers showed a strong positive effect and high value for customers and, for the experiment, a valid product concept developed by the proposed managerial approach for the FFE. In conclusion, we suggest a managerial approach derived from the experiment.

Keywords: concept development, concept testing, customer discovery, entrepreneurship, entrepreneurial management, idea generation, idea screening, startup management

Procedia PDF Downloads 416
296 Defuzzification of Periodic Membership Function on Circular Coordinates

Authors: Takashi Mitsuishi, Koji Saigusa

Abstract:

This paper presents a circular polar coordinate transformation of periodic fuzzy membership functions. The purpose is identification of the domain of periodic membership functions in the consequent part of IF-THEN rules. The proposed methods are applied to a simple color construction system.
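The abstract does not give the defuzzification formula; a standard way to defuzzify on a circle, sketched below, is the circular (vector) mean, which avoids the wrap-around error an ordinary centroid makes on a periodic domain such as hue. The membership function here is invented.

```python
import math

def circular_centroid(mu, n=360):
    """Defuzzify a periodic membership function mu(theta) on [0, 2*pi)
    via the circular (vector) mean: atan2 of the membership-weighted
    sums of sin and cos."""
    thetas = [2 * math.pi * k / n for k in range(n)]
    s = sum(mu(t) * math.sin(t) for t in thetas)
    c = sum(mu(t) * math.cos(t) for t in thetas)
    return math.atan2(s, c) % (2 * math.pi)

# A fuzzy set peaked at 0 rad (e.g. the hue "red" on a colour circle).
# An ordinary centroid over [0, 2*pi) would wrongly land near pi.
mu_red = lambda t: max(0.0, math.cos(t))
angle = circular_centroid(mu_red)
```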

Keywords: periodic membership function, polar coordinates transformation, defuzzification, circular coordinates

Procedia PDF Downloads 282
295 Facial Recognition on the Basis of Facial Fragments

Authors: Tetyana Baydyk, Ernst Kussul, Sandra Bonilla Meza

Abstract:

There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can give only an approximate estimate. In this paper, we propose a more direct measure of the importance of different fragments for face recognition: select a recognition method and a face database, and experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as the face recognition method and parts of the LFW (Labeled Faces in the Wild) database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.

Keywords: face recognition, labeled faces in the wild (LFW) database, random local descriptor (RLD), random features

Procedia PDF Downloads 331
294 Urban Security and Social Sustainability in Cities of Developing Countries

Authors: Taimaz Larimian, Negin Sadeghi

Abstract:

Very little is known about the impact of urban security on the level of social sustainability within the cities of developing countries. Urban security is still struggling to find its position in the social sustainability agenda, despite the significant role of safety and security in different aspects of people's lives. This paper argues that urban safety and security should be better integrated within the social sustainability framework. With this aim, this study investigates the hypothesized relationship between social sustainability and the Crime Prevention through Environmental Design (CPTED) approach at the neighborhood scale. The study proposes a model of the key influential dimensions of CPTED, analyzed into localized factors and sub-factors. These factors are then prioritized using pairwise comparison logic and the fuzzy group Analytic Hierarchy Process (AHP) method in order to determine the relative importance of each factor in achieving social sustainability. The proposed model then investigates social sustainability in six case-study neighborhoods of Isfahan city, based on residents' perceptions of safety within their neighborhood. A mixed method of data collection is used: a self-administered questionnaire exploring residents' perceptions of social sustainability in their area of residency, followed by an on-site observation to measure the CPTED construct. In all, 150 respondents from the selected neighborhoods were involved in this research. The model indicates that the CPTED approach has a significant direct influence on increasing social sustainability at the neighborhood scale. According to the findings, among the different dimensions of CPTED, ‘activity support’ and ‘image/management’ have the most influence on people's feeling of safety within the studied areas. The model represents a useful design tool for achieving urban safety and security in the development of more socially sustainable and user-friendly urban areas.
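The fuzzy-group details of the AHP step are not given in the abstract; the crisp core of any AHP prioritisation, sketched here on a hypothetical 3x3 comparison of CPTED factors, is deriving a priority vector from a reciprocal pairwise comparison matrix, e.g. by the geometric-mean (row) method.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix using the geometric-mean (row) method."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical comparisons of three CPTED dimensions, e.g.
# activity support vs image/management vs natural surveillance:
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w = ahp_weights(A)
```

For this perfectly consistent matrix the weights come out in the ratio 4:2:1, i.e. (4/7, 2/7, 1/7); the fuzzy group variant in the paper replaces the crisp entries with aggregated fuzzy judgments before a step of this kind.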

Keywords: crime prevention through environmental design (CPTED), developing countries, fuzzy analytic hierarchy process (FAHP), social sustainability

Procedia PDF Downloads 279
293 Magnetic, Magnetocaloric, and Electrical Properties of Pr0.7Ca0.3Mn0.9M0.1O3

Authors: A. Selmi, A. Bettaibi, H. Rahmouni, R. M’nassri, N. Chniba Boudjada, A. Chiekhrouhou, K. Khirouni

Abstract:

An investigation of the magnetic and magnetocaloric properties of Pr₀.₇Ca₀.₃Mn₀.₉M₀.₁O₃ perovskite manganites (M = Cr and Ni) has been carried out. Our compounds were prepared by the conventional solid-state reaction method at high temperatures. Rietveld refinement of the X-ray diffraction patterns using the FULLPROF method shows that all compounds adopt the orthorhombic structure with the Pnma space group. The partial substitution at the Mn site drives the system from a charge-ordered state to a ferromagnetic one, with Curie temperatures Tc = 150 K, 118 K, and 116 K. Magnetization measurements versus temperature in an applied magnetic field of 0.05 T show that all our samples exhibit a paramagnetic–ferromagnetic transition with decreasing temperature. From the M(H) isotherms, we have deduced the magnetic entropy change, which presents maximum values of 2.37 J/kg.K and 2.94 J/kg.K in a magnetic field change of 5 T for M = Cr and Ni, respectively.
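The abstract does not spell out its numerical procedure; the standard way to deduce the magnetic entropy change from M(H) isotherms is the Maxwell relation, ΔS_M(T, ΔH) = ∫₀^H (∂M/∂T)_H dH, discretised between neighbouring isotherms. The isotherm values below are invented for illustration.

```python
def entropy_change(h, m1, m2, t1, t2):
    """Magnetic entropy change between two magnetisation isotherms via
    the Maxwell relation, Delta_S = integral of (dM/dT)_H dH,
    evaluated with the trapezoidal rule. With h in tesla, m in
    A m^2/kg (emu/g) and T in kelvin, the result is in J/(kg K)."""
    dmdt = [(b - a) / (t2 - t1) for a, b in zip(m1, m2)]
    ds = 0.0
    for i in range(len(h) - 1):
        ds += 0.5 * (dmdt[i] + dmdt[i + 1]) * (h[i + 1] - h[i])
    return ds

# Hypothetical isotherms bracketing Tc (M falls as T rises near Tc):
h = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
m148 = [0.0, 40.0, 55.0, 63.0, 68.0, 71.0]   # at 148 K
m152 = [0.0, 36.0, 50.0, 58.0, 63.0, 66.0]   # at 152 K
dS = entropy_change(h, m148, m152, 148.0, 152.0)
```

dS is negative near Tc (entropy is removed on magnetising); the reported maxima of 2.37 and 2.94 J/kg.K are the magnitudes |ΔS_M| over the full set of isotherms.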

Keywords: manganites, magnetocaloric, magnetic, refrigeration

Procedia PDF Downloads 51
292 Adaptive Data Approximations Codec (ADAC) for AI/ML-based Cyber-Physical Systems

Authors: Yong-Kyu Jung

Abstract:

The fast growth in information technology has led to demands to access and process data. CPSs heavily depend on the timing of hardware/software operations and communication over the network (i.e., real-time/parallel operations in CPSs, e.g., autonomous vehicles). Data processing is an important means of overcoming these issues in data management, reducing the gap between technological growth on one side and data complexity and channel bandwidth on the other. An adaptive perpetual data approximation method is introduced to manage the actual entropy of the digital spectrum. An ADAC, implemented as an accelerator and/or as apps for servers and smart connected devices, adaptively rescales digital content (by 62.8% on average) and reduces data processing/access time, energy, and encryption/decryption overheads in AI/ML applications (e.g., facial ID/recognition).

Keywords: adaptive codec, AI, ML, HPC, cyber-physical, cybersecurity

Procedia PDF Downloads 53
291 Suitability of Black Box Approaches for the Reliability Assessment of Component-Based Software

Authors: Anjushi Verma, Tirthankar Gayen

Abstract:

Although reliability is an important attribute of quality, especially for mission-critical systems, there does not exist any versatile model even today for the reliability assessment of component-based software. The existing Black Box models are found to make various assumptions which may not always be realistic and may be quite contrary to the actual behaviour of software. They focus on observing the manner in which the system behaves without considering the structure of the system, the components composing the system, their interconnections, dependencies, usage frequencies, etc. As a result, the entropy (uncertainty) in assessment using these models is very high. Though there are some models based on the operation profile, it sometimes becomes extremely difficult to obtain the exact operation profile concerned with a given operation. This paper discusses the drawbacks, deficiencies and limitations of Black Box approaches from the perspective of various authors and finally proposes a conceptual model for the reliability assessment of software.

Keywords: black box, faults, failure, software reliability

Procedia PDF Downloads 423
290 Hybrid GNN-Based Machine Learning Forecasting Model for Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real-time. This research will help manufacturing industries and utilities, e.g., water, electricity etc., reduce unplanned downtimes and consequential financial losses.
Method: The data stored within a process control system, e.g., Industrial IoT, Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that might contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the Deep Learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors. Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes.
The model can interface with a plant's 'process control system' in real-time to perform forecasting and classification tasks, aiding asset management engineers to operate their machines more efficiently and reduce unplanned downtimes. A series of trials is planned for this model in other manufacturing industries.
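The abstract does not define its entropy estimate; one minimal choice for streaming sensor data, sketched below on invented readings, is the Shannon entropy of an equal-width-binned window, whose rise or fall is a cheap indicator of behavioural change.

```python
import math
from collections import Counter

def window_entropy(values, bins=8):
    """Shannon entropy (bits) of a sensor window after equal-width
    binning of the values into `bins` buckets."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0          # guard a constant window
    labels = [min(int((v - lo) / width), bins - 1) for v in values]
    n = len(values)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

# Hypothetical flow readings: a steady window vs an erratic one.
steady = [10.0, 10.1, 10.0, 9.9, 10.1, 10.0, 9.9, 10.0]
erratic = [10.0, 14.2, 7.5, 12.9, 8.8, 15.1, 6.4, 11.7]
```

A trace of this quantity over sliding windows is the kind of auxiliary feature the hybrid model feeds to the GNN alongside spectral-change estimates.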

Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning

Procedia PDF Downloads 119
289 Tuning Cubic Equations of State for Supercritical Water Applications

Authors: Shyh Ming Chern

Abstract:

Cubic equations of state (EoS), popular due to their simple mathematical form, ease of use, semi-theoretical nature and reasonable accuracy, are normally fitted to vapor-liquid equilibrium P-v-T data. As a result, they often show poor accuracy in the region near and above the critical point. In this study, the performance of the renowned Peng-Robinson (PR) and Patel-Teja (PT) EoSs around the critical region has been examined against the P-v-T data of water. Both display large deviations at the critical point. For instance, the PR EoS exhibits discrepancies as high as 47% for the specific volume, 28% for the enthalpy departure and 43% for the entropy departure at the critical point. It is shown that incorporating P-v-T data of the supercritical region into the retuning of a cubic EoS can improve its performance above the critical point dramatically. Adopting a retuned acentric factor of 0.5491, instead of its genuine value of 0.344, for water in the PR EoS, and a new F of 0.8854, instead of its original value of 0.6898, for water in the PT EoS, reduces the discrepancies to about one third or less.
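A minimal sketch of the PR EoS itself, using its standard published constants; the abstract's retuned acentric factor (0.5491) would simply replace the genuine omega (0.344) in the call below.

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def pr_pressure(T, v, Tc, Pc, omega):
    """Peng-Robinson EoS: P = RT/(v - b) - a*alpha/(v^2 + 2bv - b^2),
    with T in K, v in m^3/mol, Pc in Pa."""
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    return R * T / (v - b) - a * alpha / (v ** 2 + 2 * b * v - b ** 2)

# Water: Tc = 647.1 K, Pc = 22.064 MPa, genuine acentric factor 0.344.
P = pr_pressure(700.0, 1.0e-3, 647.1, 22.064e6, 0.344)
```

At large molar volume the attraction and covolume terms vanish and the prediction collapses to the ideal-gas law, which is a quick sanity check on an implementation.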

Keywords: equation of state, EoS, supercritical water, SCW

Procedia PDF Downloads 500
288 Post-Quantum Resistant Edge Authentication in Large Scale Industrial Internet of Things Environments Using Aggregated Local Knowledge and Consistent Triangulation

Authors: C. P. Autry, A. W. Roscoe, Mykhailo Magal

Abstract:

We discuss the theoretical model underlying 2BPA (two-band peer authentication), a practical alternative to conventional authentication of entities and data in IoT. In essence, this involves assembling a virtual map of authentication assets in the network, typically leading to many paths of confirmation between any pair of entities. This map is continuously updated, confirmed, and evaluated. The value of authentication along multiple disjoint paths becomes very clear, and we require analogues of triangulation to extend authentication along extended paths and deliver it along all possible paths. We discover that if an attacker wants to make an honest node falsely believe she has authenticated another, then the length of the authentication paths is of little importance. This is because optimal attack strategies correspond to minimal cuts in the authentication graph and do not contain multiple edges on the same path. The authentication provided by disjoint paths normally is additive (in entropy).
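The link the abstract relies on, that optimal attacks correspond to minimal cuts, is Menger's theorem: the number of edge-disjoint paths between two nodes equals the minimum edge cut separating them. This can be checked with a small unit-capacity max-flow; the authentication graph below is illustrative only.

```python
from collections import deque, defaultdict

def edge_disjoint_paths(edges, s, t):
    """Number of edge-disjoint s-t paths (= min edge cut, by Menger's
    theorem), via unit-capacity Edmonds-Karp max-flow."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:                 # undirected authentication links
        cap[u, v] += 1; cap[v, u] += 1
        adj[u].add(v); adj[v].add(u)
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:   # BFS for an augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[u, v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:   # push one unit along the path
            u = parent[v]
            cap[u, v] -= 1; cap[v, u] += 1
            v = u
        flow += 1

# Two disjoint confirmation paths A-B-D and A-C-D, plus a cross edge:
net = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D"), ("B", "C")]
k = edge_disjoint_paths(net, "A", "D")
```

Here k = 2: however long the individual paths, an attacker must cut two edges, which is why (as the abstract notes) path length matters little and disjoint-path authentication adds up.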

Keywords: authentication, edge computing, industrial IoT, post-quantum resistance

Procedia PDF Downloads 172
287 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enables end users to use the services of the cloud on a ‘pay per usage’ strategy. This technology is growing at a fast pace, and so is its security threat. Among the various services provided by the cloud is storage. In this service, security is a vital factor both for authenticating legitimate users and for protecting information. This paper brings in efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. Unique identification and low intrusiveness make user-behaviour-based biometrics more reliable than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here include not a single trait but multiple ones, viz., iris and fingerprints. The coordinating stage of the authentication system functions on an Ensemble Support Vector Machine (SVM), with optimization by assembling weights of the base SVMs for the SVM ensemble after each individual SVM of the ensemble is trained by the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template by a fusion process. The data security problem is averted, and an enhanced security architecture is proposed, using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect the records from hackers by preventing the cipher text from being broken back into the original text. This improves the authentication performance: the proposed double cryptographic key scheme is capable of providing better user authentication and better security, distinguishing between genuine and fake users.
Thus, there are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The extraction of the feature and texture properties from the respective fingerprint and iris images is done initially. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, the double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if they have already been intercepted. The results prove that the authentication process is optimal and the stored information is secured.

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 233
286 Use of Diatomite for the Elimination of Chromium(III) from Wastewater, Annaba, Algeria

Authors: Sabiha Chouchane, Toufik Chouchane, Azzedine Hani

Abstract:

The wastewater was treated with a natural adsorbent, diatomite, to eliminate chromium(III). The diatomite comes from Sig (west of Algeria). Physicochemical characterization revealed that it is mainly made up of silica and lime, with a lower proportion of alumina. The process, considered in a static regime at 20°C, with a stirring speed of 150 rpm, a pH of 4 and a grain diameter between 100 and 150 µm, shows that one gram of purified diatomite can fix, according to the Langmuir model, up to 39.64 mg/g of chromium, with pseudo-first-order kinetics. The pseudo-equilibrium time highlighted is 25 minutes. The value of the RL ratio, reflecting the affinity between the adsorbent and the adsorbate, indicates that the solid used has a good adsorption capacity. The external transport of the metal ions from the solution to the adsorbent seems to be a step controlling the speed of the overall process. On the other hand, internal transport in the pores is not the only mechanism limiting the sorption kinetics. Thermodynamic parameters show that chromium sorption is spontaneous and exothermic, with negative entropy.
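The two Langmuir quantities the abstract leans on, the monolayer capacity and the separation factor RL = 1/(1 + KL C0), can be sketched in a few lines. The capacity 39.64 mg/g is taken from the abstract; KL and C0 below are illustrative values, not the study's.

```python
def langmuir_qe(c_e, q_max, k_l):
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

def separation_factor(k_l, c0):
    """R_L = 1/(1 + K_L * C0); 0 < R_L < 1 signals favourable
    adsorption, R_L > 1 unfavourable, R_L = 0 irreversible."""
    return 1.0 / (1.0 + k_l * c0)

# q_max from the abstract; K_L (L/mg) and C0 (mg/L) are hypothetical.
k_l, c0 = 0.15, 50.0
rl = separation_factor(k_l, c0)
qe = langmuir_qe(c0, 39.64, k_l)
```

An RL strictly between 0 and 1, as here, is the criterion behind the abstract's statement that the RL value indicates a good adsorption capacity.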

Keywords: adsorption, diatomite, Cr(III), wastewater

Procedia PDF Downloads 23
285 Patient-Specific Modeling Algorithm for Medical Data Based on AUC

Authors: Guilherme Ribeiro, Alexandre Oliveira, Antonio Ferreira, Shyam Visweswaran, Gregory Cooper

Abstract:

Patient-specific models are instance-based learning algorithms that take advantage of the particular features of the patient case at hand to predict an outcome. We introduce two patient-specific algorithms based on the decision-tree paradigm that use the AUC as the metric for selecting an attribute. We apply the patient-specific algorithms to predict outcomes in several datasets, including medical datasets. Compared to the entropy-based patient-specific decision path (PSDP) and CART methods, the AUC-based patient-specific decision path models performed equivalently on area under the ROC curve (AUC). Our results provide support for patient-specific methods being a promising approach for making clinical predictions.
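The attribute-selection metric above is the AUC; a standard way to compute it, sketched here on toy data, is the rank (Mann-Whitney) statistic: the probability that a randomly chosen positive case outranks a randomly chosen negative one, with ties counted as one half.

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic,
    counting score ties as 1/2 a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy candidate split: predicted scores for 6 cases (1 = event).
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
a = auc(y, s)
```

In an AUC-based decision-path algorithm a quantity like this, rather than information gain (entropy), would be evaluated for each candidate attribute at a node.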

Keywords: instance-based approach, area under the ROC curve, patient-specific decision path, clinical predictions

Procedia PDF Downloads 449
284 Kinetics and Thermodynamics of Sorption of 5-Fluorouracil (5-FL) on Carbon Nanotubes

Authors: Muhammad Imran Din

Abstract:

The aim of this study was to understand the interaction between multi-walled carbon nanotubes (MCNTs) and anticancer agents and to evaluate the drug-loading ability of MCNTs. Batch adsorption experiments were carried out for the adsorption of 5-fluorouracil (5-FL) onto MCNTs. The effect of various operating variables, viz., adsorbent dosage, pH, contact time and temperature, on the adsorption of 5-FL has been studied. The Freundlich adsorption model was successfully employed to describe the adsorption process. It was found that the pseudo-second-order mechanism is predominant and that the overall rate of the 5-FL adsorption process appears to be controlled by more than one step. Thermodynamic parameters such as the free energy change (ΔG°), enthalpy change (ΔH°) and entropy change (ΔS°) have been calculated, revealing the spontaneous, endothermic and feasible nature of the adsorption process. The results showed that carbon nanotubes were able to form supramolecular complexes with 5-FL by π-π stacking and possessed favorable loading properties as drug carriers.
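The pseudo-second-order model named above is usually fitted in its linearised form, t/qt = 1/(k2·qe²) + t/qe, with qe and k2 read off the slope and intercept. The uptake data below are synthetic, generated from known constants so the fit can be checked; only the model form comes from the abstract.

```python
def fit_pseudo_second_order(t, qt):
    """Linearised pseudo-second-order fit: t/qt = 1/(k2*qe^2) + t/qe.
    OLS of t/qt against t gives slope = 1/qe, intercept = 1/(k2*qe^2);
    returns (qe, k2)."""
    x = t
    y = [ti / qi for ti, qi in zip(t, qt)]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
            / sum((xi - xbar) ** 2 for xi in x)
    intercept = ybar - slope * xbar
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept
    return qe, k2

# Synthetic uptake curve with qe = 20 mg/g and k2 = 0.01 g/(mg min):
qe0, k20 = 20.0, 0.01
ts = [5.0, 10.0, 20.0, 40.0, 80.0]
qs = [qe0 ** 2 * k20 * ti / (1.0 + qe0 * k20 * ti) for ti in ts]
qe_fit, k2_fit = fit_pseudo_second_order(ts, qs)
```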

Keywords: drug, adsorption, anticancer, 5-Fluorouracil (5-FL)

Procedia PDF Downloads 337
283 Comparative Study of Water Quality Parameters in the Proximity of Various Landfills Sites in India

Authors: Abhishek N. Srivastava, Rahul Singh, Sumedha Chakma

Abstract:

The rapid urbanization in developing countries is generating an enormous amount of waste, leading to the creation of unregulated landfill sites at various disposal places. The liquid waste, known as leachate, produced at these landfill sites severely affects the surrounding water quality. The water quality in the proximity of a landfill is affected by various physico-chemical parameters of leachate, such as pH, alkalinity, total hardness, conductivity, chloride, total dissolved solids (TDS), total suspended solids (TSS), sulphate, nitrate, phosphate, fluoride, sodium and potassium; biological parameters such as biochemical oxygen demand (BOD), chemical oxygen demand (COD) and faecal coliform; and heavy metals such as cadmium (Cd), lead (Pb), iron (Fe), mercury (Hg), arsenic (As), cobalt (Co), manganese (Mn), zinc (Zn), copper (Cu), chromium (Cr) and nickel (Ni). However, the distribution of these parameters in the leachate depends on the nature of the waste dumped at each landfill site; therefore, it becomes very difficult to predict which leachate parameter is mainly responsible for water quality contamination. The present study undertakes a comparative analysis of the physical, chemical and biological parameters of various landfills in India, viz. the Okhla, Ghazipur and Bhalswa landfills in NCR Delhi, the Deonar landfill in Mumbai, the Dhapa landfill in Kolkata, and the Kodungaiyur and Perungudi landfills in Chennai. Statistical analysis of the parameters was carried out using the Statistical Package for the Social Sciences (SPSS) and the LandSim 2.5 model to simulate the long-term effect of the various parameters on different time scales. Further, the uncertainties of the various input parameters have been characterized using the fuzzy alpha cut (FAC) technique to check the sensitivity of the water quality parameters in the proximity of the landfill sites.
Finally, the study would help to suggest the best method for preventing pollution migration from the landfill sites on a priority basis.
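The fuzzy alpha cut technique mentioned above is, at its core, interval propagation: an uncertain input is modelled as a fuzzy number, each alpha level yields an interval, and the model is evaluated on the interval endpoints. The sketch below uses a triangular fuzzy number and a hypothetical monotone dilution model; none of the figures come from the study.

```python
def alpha_cut(tfn, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, b, c):
    [a + alpha*(b - a), c - alpha*(c - b)]."""
    a, b, c = tfn
    return (a + alpha * (b - a), c - alpha * (c - b))

def propagate(f, cuts):
    """Propagate alpha-cut intervals through a monotonically
    increasing model f by evaluating it at the interval endpoints."""
    return [(f(lo), f(hi)) for lo, hi in cuts]

# Uncertain leachate COD (mg/L) as a triangular fuzzy number:
cod = (800.0, 1000.0, 1300.0)
cuts = [alpha_cut(cod, a / 4) for a in range(5)]  # alpha = 0, 0.25, ..., 1
# Hypothetical linear dilution model for the downstream concentration:
diluted = propagate(lambda x: 0.1 * x + 20.0, cuts)
```

The spread of the output intervals across alpha levels is what indicates how sensitive a water quality parameter is to the uncertainty in that input.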

Keywords: landfill leachate, water quality, LandSim, fuzzy alpha cut

Procedia PDF Downloads 105
282 An Improved OCR Algorithm on Appearance Recognition of Electronic Components Based on Self-adaptation of Multifont Template

Authors: Zhu-Qing Jia, Tao Lin, Tong Zhou

Abstract:

Optical Character Recognition has been extensively utilized, but it is rarely employed specifically for the recognition of electronic components. This paper suggests a highly effective algorithm for the appearance identification of integrated circuit components based on existing character recognition methods, and analyzes its pros and cons.

Keywords: optical character recognition, fuzzy page identification, mutual correlation matrix, confidence self-adaptation

Procedia PDF Downloads 511
281 Exploring Socio-Economic Barriers of Green Entrepreneurship in Iran and Their Interactions Using Interpretive Structural Modeling

Authors: Younis Jabarzadeh, Rahim Sarvari, Negar Ahmadi Alghalandis

Abstract:

Entrepreneurship at both the individual and organizational level is one of the most important driving forces in economic development and leads to growth and competition, job generation and social development. Especially in developing countries, the role of entrepreneurship in economic and social prosperity is more emphasized. But the effect of global economic development on the environment is undeniable, especially in negative ways, and there is a need to rethink current business models and the way entrepreneurs act, to introduce new businesses that address and embed environmental issues in order to achieve sustainable development. In this paper, green or sustainable entrepreneurship is addressed in Iran to identify the challenges and barriers entrepreneurs in the economic and social sectors face in developing green business solutions. Sustainable or green entrepreneurship has been gaining interest among scholars in recent years, and addressing its challenges and barriers needs much more attention to fill the gap in the literature and facilitate the path those entrepreneurs are pursuing. This research comprises two main phases: qualitative and quantitative. In the qualitative phase, after a thorough literature review, the fuzzy Delphi method is utilized to verify the challenges and barriers by gathering a panel of experts and surveying them. In this phase, several other contextually related factors were added to the list of barriers and challenges identified in the literature. Then, in the quantitative phase, Interpretive Structural Modeling is applied to construct a network of interactions among the barriers identified in the previous phase. Again, a panel of subject matter experts comprising academic and industry experts was surveyed. The results of this study can be used by policymakers in both the public and industry sectors to introduce more systematic solutions to eliminate those barriers and help entrepreneurs overcome the challenges of sustainable entrepreneurship.
It also contributes to the literature as the first research of this type dealing with the barriers of sustainable entrepreneurship and exploring their interaction.
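The computational heart of the Interpretive Structural Modeling step above is the reachability matrix: the binary direct-influence matrix from the expert survey is closed under transitivity (Warshall's algorithm) before the barriers are partitioned into levels. The influence matrix below is hypothetical.

```python
def reachability(adjacency):
    """Transitive closure (Warshall) of a binary direct-influence
    matrix, yielding the final reachability matrix used in ISM."""
    n = len(adjacency)
    r = [row[:] for row in adjacency]
    for i in range(n):
        r[i][i] = 1                    # every barrier reaches itself
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Hypothetical direct influences among 4 barriers (row influences column):
A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
R = reachability(A)
```

From R, each barrier's reachability and antecedent sets are intersected to assign ISM levels and draw the final digraph of barrier interactions.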

Keywords: green entrepreneurship, barriers, fuzzy Delphi method, interpretive structural modeling

Procedia PDF Downloads 129
280 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India

Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit

Abstract:

Landslides are a geomorphic process that plays an essential role in hill-slope and long-term landscape evolution. But the abrupt nature of the process and its associated catastrophic forces can have undesirable socio-economic impacts, like substantial economic losses, fatalities, and ecosystem, geomorphologic and infrastructure disturbances. The estimated fatality rate is approximately 1 person/100 sq. km, and the average economic loss is more than 550 crores/year in the Himalayan belt due to landslides. This study presents a comparative performance of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final landslide susceptibility maps (LSMs), produced with better accuracy, could be used for land-use planning to prevent future losses. Dharamshala, a part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is amongst the hundred Indian cities to be developed as smart cities under the PM's Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution Linear Imaging Self-Scanning (LISS-IV) data. The thematic maps of parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in the GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use/land cover, and geology. LSMs were prepared using the information value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method in which information values are calculated as the ratio of the landslide pixels per factor class (Si/Ni) to the total landslide pixels per parameter (S/N).
Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a 'mean and neighbour' strategy for constructing the fuzzifier (input) and defuzzifier (output) membership function (MF) structure, with the frequency ratio (FR) method used for formulating the if-then rules. Two types of membership structures were utilized: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). The LSI for BG and TT was obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were validated spatially and statistically. The validation results showed that in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate.
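The Info Val computation described above can be sketched as follows — a minimal illustration assuming the standard log-transformed form of the information value (the abstract states only the density ratio Si/Ni versus S/N); the array names and shapes are hypothetical:

```python
import numpy as np

def information_value(landslide_mask, factor_classes):
    """Information value per factor class: log of the ratio of the
    landslide pixel density within the class (Si/Ni) to the overall
    landslide pixel density for the parameter (S/N)."""
    S = landslide_mask.sum()                      # total landslide pixels
    N = landslide_mask.size                       # total pixels
    iv = {}
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        Ni = in_class.sum()                       # pixels in this class
        Si = (landslide_mask & in_class).sum()    # landslide pixels in it
        if Ni > 0 and Si > 0:
            iv[int(c)] = float(np.log((Si / Ni) / (S / N)))
        else:
            iv[int(c)] = 0.0                      # no evidence for class
    return iv

def reclassify(factor_classes, iv):
    """Replace each class label by its information value; summing such
    reclassified rasters across all parameters yields the LSI map."""
    return np.vectorize(lambda c: iv[int(c)])(factor_classes)
```

One such reclassified raster is produced per causative parameter (slope angle, aspect, etc.), and the cell-wise sum gives the susceptibility index.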

Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique

Procedia PDF Downloads 104
279 A Similarity Measure for Classification and Clustering in Image Based Medical and Text Based Banking Applications

Authors: K. P. Sandesh, M. H. Suman

Abstract:

Text processing plays an important role in information retrieval, data mining, and web search. Measuring the similarity between documents is an important operation in the text processing field. In this project, a new similarity measure is proposed. To compute the similarity between two documents with respect to a feature, the proposed measure takes the following three cases into account: (1) the feature appears in both documents; (2) the feature appears in only one document; and (3) the feature appears in neither document. The proposed measure is extended to gauge the similarity between two sets of documents. The effectiveness of our measure is evaluated on several real-world data sets for text classification and clustering problems, especially in the banking and health sectors. The results show that the performance obtained by the proposed measure is better than that achieved by the other measures.
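As an illustration of how such a three-case measure can be structured (a hypothetical sketch, not the authors' exact formula — the weighting of each case is an assumption):

```python
def document_similarity(a, b, vocabulary):
    """Similarity between two documents represented as dicts of
    term -> weight, making the three cases explicit:
    (1) term in both documents  -> reward by weight agreement;
    (2) term in only one        -> no reward;
    (3) term in neither         -> contributes nothing."""
    score = 0.0
    for t in vocabulary:
        wa, wb = a.get(t, 0.0), b.get(t, 0.0)
        if wa > 0 and wb > 0:            # case 1: feature in both
            score += min(wa, wb) / max(wa, wb)
        elif wa > 0 or wb > 0:           # case 2: feature in one only
            score += 0.0
        # case 3: feature in neither document -- neutral
    return score / len(vocabulary) if vocabulary else 0.0
```

A measure built this way stays in [0, 1] and can distinguish "both documents lack the term" from "the documents disagree about the term", which plain cosine similarity cannot.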

Keywords: document classification, document clustering, entropy, accuracy, classifiers, clustering algorithms

Procedia PDF Downloads 484
278 Homomorphic Conceptual Framework for Effective Supply Chain Strategy (HCEFSC) within Operational Research (OR) with Sustainability and Phenomenology

Authors: Hussain Abdullah Al-Salamin, Elias Ogutu Azariah Tembe

Abstract:

Supply chain (SC) is an operational research (OR) approach and technique which acts as a catalyst within the central nervous system of business today. Without SC, any type of business is in the doldrums, hence entropy. SC is the lifeblood of business today because it is the pivotal hub which provides imperative competitive advantage. The paper presents a conceptual framework dubbed the Homomorphic Conceptual Framework for Effective Supply Chain Strategy (HCEFSC). The term homomorphic is derived from the abstract-algebraic term homomorphism (same shape), which also embeds the following related mathematical notions: monomorphisms, isomorphisms, automorphisms, and endomorphisms. The HCEFSC is intertwined and integrated with wide and broad sets of elements.

Keywords: homomorphism, isomorphism, monomorphisms, automorphisms, epimorphisms, endomorphism, supply chain, operational research (OR)

Procedia PDF Downloads 346
277 Value Engineering Change Proposal Application in Construction of Road-Building Projects

Authors: Mohammad Mahdi Hajiali

Abstract:

Many construction projects in Iran are constrained by limited financial resources. In a developing country such as Iran, where a large number of projects are launched every year, reducing project costs through a systematic method would greatly lower the cost of major construction projects and allow them to finish faster and more efficiently. Roads are a key component of transportation infrastructure and account for a considerable share of the national budget. In addition, a major part of the related ministry's budget is spent on repairing, improving, and maintaining roads. Value engineering is a simple and powerful methodology that, over the past six decades, has succeeded in reducing the cost of many projects. The specific mechanism for applying value engineering during project implementation is called a value engineering change proposal (VECP). This research applied VECP to one of the road-building projects in Iran in order to enhance the value of such projects and reduce their cost. After applying VECP in this case study, an idea emerged: using concrete pavement instead of hot-mix asphalt (HMA), with added fiber to improve the concrete pavement's performance. To choose the best alternative, the VE team decided to gather expert opinions on pavement systems and use Fuzzy TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) to rank them. Finally, Jointed Plain Concrete Pavement (JPCP) was selected. The team also tested concrete samples with fibers available in Iran, and the results showed a significant improvement in concrete properties such as flexural strength.
In the end, it was shown that using fiber-reinforced concrete pavement instead of asphalt pavement yields significant savings in cost and time, along with improvements in quality, durability, and longevity.
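The ranking step can be illustrated with the classical (crisp) TOPSIS procedure; the study used the fuzzy variant, which replaces crisp scores with fuzzy numbers, but the core steps — normalize, weight, and measure distance to the ideal and anti-ideal solutions — are the same:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS ranking sketch.
    matrix:  alternatives x criteria score matrix
    weights: one weight per criterion
    benefit: True for benefit criteria (more is better),
             False for cost criteria (less is better)."""
    m = np.asarray(matrix, dtype=float)
    # 1. vector-normalize each criterion column, then apply the weights
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights)
    # 2. ideal best/worst value per criterion
    benefit = np.asarray(benefit)
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # 3. Euclidean distances to the ideal and anti-ideal solutions,
    #    combined into a closeness coefficient (higher = better)
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)
```

In the pavement study, the rows would be the candidate pavement alternatives and the columns the experts' criteria (cost, durability, construction time, etc.).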

Keywords: road-building projects, value engineering change proposal (VECP), Jointed Plain Concrete Pavement (JPCP), Fuzzy TOPSIS, fiber-reinforced concrete

Procedia PDF Downloads 161
276 Modeling Driving Distraction Considering Psychological-Physical Constraints

Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang

Abstract:

Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from physical motion constraints under distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, model accuracy is not very satisfying, due to a lack of modeling of the cognitive mechanism underlying distraction. This study models driving distraction based on the Queueing Network-Model Human Processor (QN-MHP), using the queueing structure of the model to perform task invocation and switching for the distracted operation and control of the vehicle. Under the QN-MHP's assumption about the cognitive sub-network, server F is a structural bottleneck: later information must wait for the previous information to leave server F before it can be processed there, so the waiting time for task switching needs to be calculated. Since the QN-MHP has different information-processing paths for auditory and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task must pass through the visual perception sub-network, and their stimuli are asynchronous — a stimulus onset asynchrony (SOA) — which must be considered when calculating the task-switching waiting time. In the case of auditory distraction, the auditory distraction task and the driving task do not compete for the server resources of the perceptual sub-network, and their stimuli can be treated as synchronized, without considering the time difference in receiving them. Following the Theory of Planned Behavior (TPB) for drivers, this study uses risk entropy as the decision criterion for driver task switching.
A logistic regression model with risk entropy as the independent variable determines whether the driver performs a distraction task, explaining the relationship between perceived risk and distraction. Furthermore, to model a driver's perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which then executes the classical Intelligent Driver Model (IDM). The proposed driving distraction model integrates the psychological cognitive process of a driver with physical motion characteristics, resulting in both high accuracy and interpretability. This paper uses 773 segments of distracted car-following from the Shanghai Naturalistic Driving Study (SH-NDS) to classify the patterns of distracted behavior on different road facilities, obtaining three distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that the model can effectively simulate distracted car-following behavior of different patterns on various roadway facilities, and that its performance is better than the traditional IDM with distraction-related error terms. The proposed model overcomes the limitations of physical-constraints-based models in replicating dangerous driving behaviors and the internal characteristics of an individual. Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios.
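The task-switching decision described above can be sketched as a logistic function of risk entropy; the coefficients here are illustrative placeholders, not values fitted to the SH-NDS data:

```python
import math

def p_distract(risk_entropy, b0=2.0, b1=-1.5):
    """Probability that the driver engages in the distraction task,
    modeled as a logistic function of risk entropy. b0 and b1 are
    hypothetical; in the study they would be estimated from data,
    with b1 < 0 encoding that higher perceived risk suppresses
    distraction engagement."""
    z = b0 + b1 * risk_entropy
    return 1.0 / (1.0 + math.exp(-z))

def switches_to_distraction(risk_entropy, threshold=0.5):
    """Binary task-switching decision used by the simulation step."""
    return p_distract(risk_entropy) > threshold
```

At each simulation step the model would evaluate this decision before routing attention between the driving task and the distraction task.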

Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints

Procedia PDF Downloads 59
275 A Survey on Lossless Compression of Bayer Color Filter Array Images

Authors: Alina Trifan, António J. R. Neves

Abstract:

Although most digital cameras acquire images in a raw format, based on a Color Filter Array (CFA) that arranges RGB color filters on a square grid of photosensors, most image compression techniques do not use the raw data; instead, they use the RGB result of an interpolation algorithm applied to the raw data. This approach is inefficient: by performing lossless compression of the raw data, followed by pixel interpolation, digital cameras could be more power efficient and provide images with increased resolution, given that the interpolation step could be shifted to an external processing unit. In this paper, we conduct a survey on the use of lossless compression algorithms with raw Bayer images. Moreover, in order to reduce the effect of the transitions between colors that increase the entropy of the raw Bayer image, we split the image into three new images corresponding to each channel (red, green, and blue) and study the same compression algorithms applied to each one individually. This simple pre-processing stage allows an improvement of more than 15% in predictive-based methods.
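The channel-splitting pre-processing step can be sketched as follows. Note that this sketch separates the two green sites of an RGGB mosaic into distinct planes, whereas the paper merges them into a single green channel; the entropy measure is the usual Shannon entropy of the amplitude histogram:

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit image plane."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts[counts > 0] / img.size
    return float(-(p * np.log2(p)).sum())

def split_bayer_rggb(raw):
    """Split an RGGB mosaic into its four same-color planes, so each
    plane can be compressed without the color-transition entropy of
    the interleaved mosaic."""
    return {"R":  raw[0::2, 0::2], "G1": raw[0::2, 1::2],
            "G2": raw[1::2, 0::2], "B":  raw[1::2, 1::2]}
```

On a mosaic whose color sites differ in level, each extracted plane has lower entropy than the interleaved raw image, which is exactly what makes the per-channel predictive coding more effective.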

Keywords: bayer image, CFA, lossless compression, image coding standards

Procedia PDF Downloads 296
274 Performance Study of Cascade Refrigeration System Using Alternative Refrigerants

Authors: Gulshan Sachdeva, Vaibhav Jain, S. S. Kachhwaha

Abstract:

Cascade refrigeration systems employ a series of single-stage vapor compression units that are thermally coupled through evaporator/condenser cascades. Different refrigerants are used in each circuit, depending on the optimum characteristics the refrigerant shows for a particular application. In the present research study, a steady-state thermodynamic model is developed that simulates the working of an actual cascade system. The model provides the COP and all other system parameters, such as total compressor work, temperature, pressure, enthalpy, and entropy at different state points. The working fluid in the Low Temperature Circuit (LTC) is CO2 (R744), while ammonia (R717), propane (R290), propylene (R1270), R404A, and R12 are the refrigerants in the High Temperature Circuit (HTC). The performance curves of ammonia, propane, propylene, and R404A are compared with R12 to find its nearest substitute. Results show that ammonia is the best substitute for R12.
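For the two-circuit coupling described above, the overall COP follows from the fact that the HTC must absorb both the cooling load and the LTC compressor work — a sketch under ideal coupling, ignoring cascade heat-exchanger losses:

```python
def cascade_cop(cop_ltc, cop_htc):
    """Overall COP of a two-stage cascade from the circuit COPs.
    With cooling load Q_L: W_L = Q_L/COP_L, and the HTC must reject
    Q_L + W_L, so W_H = (Q_L + W_L)/COP_H. Then
        COP = Q_L / (W_L + W_H)
            = (COP_L * COP_H) / (1 + COP_L + COP_H)."""
    return (cop_ltc * cop_htc) / (1.0 + cop_ltc + cop_htc)
```

The overall COP is always below either circuit's individual COP, which is why refrigerant selection in each circuit matters so much for the combined performance.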

Keywords: cascade system, refrigerants, thermodynamic model, production engineering

Procedia PDF Downloads 329
273 Quantitative Comparisons of Different Approaches for Rotor Identification

Authors: Elizabeth M. Annoni, Elena G. Tolkacheva

Abstract:

Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia that is a known prognostic marker for stroke, heart failure and death. Reentrant mechanisms of rotor formation, which are stable electrical sources of cardiac excitation, are believed to cause AF. No existing commercial mapping systems have been demonstrated to consistently and accurately predict rotor locations outside of the pulmonary veins in patients with persistent AF. There is a clear need for robust spatio-temporal techniques that can consistently identify rotors using unique characteristics of the electrical recordings at the pivot point that can be applied to clinical intracardiac mapping. Recently, we have developed four new signal analysis approaches – Shannon entropy (SE), Kurtosis (Kt), multi-scale frequency (MSF), and multi-scale entropy (MSE) – to identify the pivot points of rotors. These proposed techniques utilize different cardiac signal characteristics (other than local activation) to uncover the intrinsic complexity of the electrical activity in the rotors, which are not taken into account in current mapping methods. We validated these techniques using high-resolution optical mapping experiments in which direct visualization and identification of rotors in ex-vivo Langendorff-perfused hearts were possible. Episodes of ventricular tachycardia (VT) were induced using burst pacing, and two examples of rotors were used showing 3-sec episodes of a single stationary rotor and figure-8 reentry with one rotor being stationary and one meandering. Movies were captured at a rate of 600 frames per second for 3 sec. with 64x64 pixel resolution. These optical mapping movies were used to evaluate the performance and robustness of SE, Kt, MSF and MSE techniques with respect to the following clinical limitations: different time of recordings, different spatial resolution, and the presence of meandering rotors. 
To quantitatively compare the results, the SE, Kt, MSF, and MSE techniques were compared to the “true” rotor(s) identified using the phase map. Accuracy was calculated for each approach as the duration of the time series and the spatial resolution were reduced. The time-series duration was decreased from its original length of 3 sec down to 2, 1, and 0.5 sec. The spatial resolution of the original VT episodes was decreased from 64x64 pixels to 32x32, 16x16, and 8x8 pixels by uniformly removing pixels from the optical mapping video. Our results demonstrate that Kt, MSF, and MSE were able to accurately identify the pivot point of the rotor under all three clinical limitations. The MSE approach demonstrated the best overall performance, but Kt was the best at identifying the pivot point of the meandering rotor. Artifacts mildly affect the performance of the Kt, MSF, and MSE techniques, but have a strong negative impact on the performance of SE. The results of our study motivate further validation of the SE, Kt, MSF, and MSE techniques using intra-atrial electrograms from paroxysmal and persistent AF patients to see if these approaches can identify pivot points in a clinical setting. More accurate rotor localization could significantly increase the efficacy of catheter ablation to treat AF, resulting in a higher success rate for single procedures.
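Two of the four approaches can be sketched as per-pixel maps over the optical-mapping movie; this is an illustrative reconstruction of the SE and Kt ideas, not the authors' exact implementation:

```python
import numpy as np

def kurtosis_map(movie):
    """Per-pixel excess kurtosis over time for an optical-mapping
    movie of shape (frames, rows, cols). A distinctive signal shape
    near the rotor pivot shows up as an outlying kurtosis value
    (Kt approach)."""
    x = movie - movie.mean(axis=0)
    m2 = (x ** 2).mean(axis=0)
    m4 = (x ** 4).mean(axis=0)
    return m4 / np.maximum(m2 ** 2, 1e-12) - 3.0

def shannon_entropy_map(movie, bins=32):
    """Per-pixel Shannon entropy of the amplitude histogram over
    time (SE approach)."""
    frames, rows, cols = movie.shape
    out = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            hist, _ = np.histogram(movie[:, r, c], bins=bins)
            p = hist[hist > 0] / frames
            out[r, c] = -(p * np.log2(p)).sum()
    return out
```

The pivot candidate is then the pixel (or small region) whose map value stands out from the surrounding tissue, to be compared against the phase-map ground truth.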

Keywords: atrial fibrillation, optical mapping, signal processing, rotors

Procedia PDF Downloads 302
272 Metric Suite for Schema Evolution of a Relational Database

Authors: S. Ravichandra, D. V. L. N. Somayajulu

Abstract:

Stakeholders' requirements for adding more detail to the database are the main cause of schema evolution in relational databases. Further, this schema evolution causes instability in the database. Hence, this work aims to define a metric suite for schema evolution of a relational database. The metric suite calculates metrics based on the features of the database, analyses the queries on the database, and measures the coupling, cohesion, and component dependencies of the schema for existing and evolved versions of the database. The metric suite also provides an indicator of problems related to the stability and usability of the evolved database. The degree of change in the schema of a database is presented in the form of graphs that act as indicators and also show the relations between various parameters (metrics) related to the database architecture. The acquired information is used to defend and improve the stability of the database architecture. The challenges that arise in incorporating these metrics, with their varying parameters, into a suitable metric suite are discussed. To validate the proposed metric suite, experiments have been performed on publicly available datasets.

Keywords: cohesion, coupling, entropy, metric suite, schema evolution

Procedia PDF Downloads 424
271 On Generalized Cumulative Past Inaccuracy Measure for Marginal and Conditional Lifetimes

Authors: Amit Ghosh, Chanchal Kundu

Abstract:

Recently, the notion of cumulative past inaccuracy (CPI) has been proposed in the literature as a generalization of cumulative past entropy (CPE) in the univariate as well as bivariate setup. In this paper, we introduce the notion of CPI of order α (alpha) and study the proposed measure for conditionally specified models of two components that failed at different time instants, called generalized conditional CPI (GCCPI). We provide some bounds using the usual stochastic order and investigate several properties of GCCPI. The effect of monotone transformations on the proposed measure has also been examined. Furthermore, we characterize some bivariate distributions under the assumption of the conditional proportional reversed hazard rate model. Moreover, the role of GCCPI in reliability modeling has also been investigated for a real-life problem.
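For reference, the non-generalized measures that the paper builds on are commonly defined as follows — a sketch of the standard definitions for a nonnegative random variable; the order-α version introduced in the paper generalizes the logarithmic term:

```latex
% Cumulative past entropy (CPE) of X with distribution function F:
\bar{\mathcal{E}}(X) = -\int_0^{\infty} F(x)\,\log F(x)\,\mathrm{d}x
% Cumulative past inaccuracy (CPI) between X \sim F and Y \sim G:
\mathcal{I}(F, G) = -\int_0^{\infty} F(x)\,\log G(x)\,\mathrm{d}x
```

When G = F the inaccuracy reduces to the entropy, which is the sense in which CPI generalizes CPE.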

Keywords: cumulative past inaccuracy, marginal and conditional past lifetimes, conditional proportional reversed hazard rate model, usual stochastic order

Procedia PDF Downloads 226
270 Eco-Index for Assessing Ecological Disturbances at Downstream of a Hydropower Project

Authors: Chandra Upadhyaya, Arup Kumar Sarma

Abstract:

In the North-Eastern part of India, several hydropower projects have been proposed, and execution of some of them has already been initiated. There are controversies surrounding these constructions. The impact of these dams on the downstream reaches of the rivers needs to be assessed so that the ecosystem and the people living downstream are protected, by redesigning the projects if that becomes necessary. This may reduce the stresses on the affected ecosystem and the downstream population. At present, many index-based ecological methods exist for assessing impacts on ecology. However, none of these methods is capable of assessing the effect of dam-induced diurnal variation of flow in the downstream reach. An environmental flow methodology based on a hydrological index is needed that can address this effect, play an important role in riverine ecosystem management, and provide a qualitative idea of changes in the habitat of aquatic and riparian species.

Keywords: ecosystem, environmental flow assessment, entropy, IHA, TNC

Procedia PDF Downloads 357
269 Prediction of Formation Pressure Using Artificial Intelligence Techniques

Authors: Abdulmalek Ahmed

Abstract:

Formation pressure is the main factor that affects the economics and efficiency of a drilling operation. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate formation pressure based on different parameters. Some of these models use only drilling parameters to estimate pore pressure; others predict formation pressure based on log data. All of these models require an assumed trend, normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict formation pressure, and then only with one method or at most two. The objective of this research is to predict pore pressure based on both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity, and delta sonic time. Real field data are used to predict formation pressure with five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM), and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without requiring an assumed trend, unlike other models, which require either a normal or an abnormal pressure trend. Moreover, comparing the AI tools with each other indicates that SVM has the advantage of fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%).
In the end, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
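As an illustration of one of the five AI methods, a minimal Gaussian radial basis function (RBF) model can be fit by regularized least squares. The hyperparameters and the single-feature usage below are assumptions for illustration; the study's actual inputs would be the seven drilling/log parameters listed above:

```python
import numpy as np

def rbf_fit(X, y, centers=None, gamma=1.0, ridge=1e-8):
    """Fit a Gaussian RBF network by ridge-regularized least squares.
    X: (n, d) feature matrix; y: (n,) targets (pore pressure).
    By default every training point is used as a center."""
    C = X if centers is None else centers
    # design matrix of Gaussian basis responses, shape (n, m)
    Phi = np.exp(-gamma * ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1))
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(C)), Phi.T @ y)
    return C, w

def rbf_predict(X, C, w, gamma=1.0):
    """Predict targets for new feature rows X."""
    Phi = np.exp(-gamma * ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1))
    return Phi @ w
```

With all training points as centers and a small ridge term, the model nearly interpolates the training targets; in practice the centers, gamma, and ridge would be tuned against held-out well data.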

Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machine (SVM), functional networks (FN), radial basis function (RBF)

Procedia PDF Downloads 128