Search results for: S. Bhattacharya
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18

18 Probabilistic Bhattacharya Based Active Contour Model in Structure Tensor Space

Authors: Hiren Mewada, Suprava Patnaik

Abstract:

Object identification and segmentation applications require extraction of the foreground object from the background. In this paper, a Bhattacharya distance based probabilistic approach is combined with an active contour model (ACM) to segment an object from the background. In the proposed approach, the Bhattacharya histogram is calculated in a non-linear structure tensor space. Based on this histogram, a new formulation of the active contour model is proposed to segment images. The results are tested on both color and gray images from the Berkeley image database. The experimental results show that the proposed model is applicable to color and gray images as well as to texture and natural images. Moreover, compared with the Bhattacharya-based ACM in ICA space, the proposed model is able to segment multiple objects as well.
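
For readers unfamiliar with the Bhattacharya (Bhattacharyya) measure used here, the following minimal Python sketch computes the coefficient and distance between two normalized histograms; the toy histograms and bin count are illustrative and not taken from the paper.

```python
import numpy as np

def bhattacharyya(p, q, eps=1e-12):
    """Bhattacharyya coefficient and distance between two histograms."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p /= p.sum() + eps            # normalize to probability distributions
    q /= q.sum() + eps
    bc = np.sum(np.sqrt(p * q))   # coefficient in [0, 1]; 1 means identical
    return bc, -np.log(bc + eps)  # distance grows as the histograms separate

# toy example: foreground vs. background intensity histograms (16 bins)
fg, _ = np.histogram(np.random.normal(0.3, 0.05, 1000), bins=16, range=(0, 1))
bg, _ = np.histogram(np.random.normal(0.7, 0.05, 1000), bins=16, range=(0, 1))
coeff, dist = bhattacharyya(fg, bg)
print(f"coefficient={coeff:.3f}, distance={dist:.3f}")
```

In a histogram-based ACM the contour is typically evolved to increase the separation (distance) between the inside and outside distributions.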

Keywords: Active Contour, Bhattacharya Histogram, Structure tensor, Image segmentation.

17 Studies on Various Parameters Involved in Conjugation of Starch with Lysine for Excellent Emulsification Properties Using Response Surface Methodology

Authors: Sourish Bhattacharya, Priyanka Singh

Abstract:

The process parameters starch-water ratio (A, % w/v), pH of suspension (B), temperature (C, °C) and time (D, h) were optimized for the preparation of the starch-lysine conjugate, and their effect on emulsion stability was studied by calculating the emulsion stability index using response surface methodology. The optimized conditions were pH 9.0, temperature 60 °C, reaction time 6 h and starch:water ratio 1:2.5, giving an emulsion stability index of 0.72.
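
As a rough illustration of how response surface methodology relates factors to the emulsion stability index, the sketch below fits a second-order polynomial surface to hypothetical coded design points (only pH and temperature, for brevity) with scikit-learn; all data values are invented, not the paper's results.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# hypothetical coded design points for two factors: pH (B) and temperature (C)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.4, 0], [1.4, 0], [0, -1.4], [0, 1.4], [0, 0], [0, 0]])
esi = np.array([0.41, 0.55, 0.48, 0.63, 0.44, 0.60, 0.50, 0.58, 0.72, 0.70])  # invented ESI

poly = PolynomialFeatures(degree=2)                 # full quadratic response surface
model = LinearRegression().fit(poly.fit_transform(X), esi)

# scan the coded design space for the setting with the highest predicted ESI
grid = np.array([[b, c] for b in np.linspace(-1.4, 1.4, 29)
                         for c in np.linspace(-1.4, 1.4, 29)])
pred = model.predict(poly.transform(grid))
print("predicted optimum (coded pH, coded T):", grid[pred.argmax()],
      "ESI ~", round(pred.max(), 2))
```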

Keywords: Emulsion stability index, pH of suspension, Starch-water ratio, Temperature, Time.

16 An Efficient and Generic Hybrid Framework for High Dimensional Data Clustering

Authors: Dharmveer Singh Rajput , P. K. Singh, Mahua Bhattacharya

Abstract:

Clustering in high dimensional space is a difficult problem which recurs in many fields of science and engineering, e.g., bioinformatics, image processing, pattern recognition and data mining. In high dimensional space some of the dimensions are likely to be irrelevant, thus hiding the possible clustering. In very high dimensions it is common for all the objects in a dataset to be nearly equidistant from each other, completely masking the clusters. Hence, the performance of the clustering algorithm decreases. In this paper, we propose an algorithmic framework which combines the reduct concept of rough set theory with the k-means algorithm to remove the irrelevant dimensions of a high dimensional space and obtain appropriate clusters. Our experiments on test data show that this framework increases the efficiency of the clustering process and the accuracy of the results.
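
The hybrid idea can be sketched as a two-stage pipeline: first prune dimensions judged irrelevant, then run k-means in the reduced space. In the sketch below a simple variance filter stands in for the rough-set reduct computation (which the paper performs via the discernibility matrix); the data and threshold are hypothetical.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 200 samples, 50 dimensions: 3 informative dimensions forming clusters + 47 near-constant noise dims
centers = rng.choice([0.0, 5.0, 10.0], size=(200, 1))
informative = rng.normal(loc=centers, scale=0.5, size=(200, 3))
noise = rng.normal(0, 0.01, size=(200, 47))
X = np.hstack([informative, noise])

# stage 1 (stand-in for the reduct): drop dimensions carrying almost no information
X_reduced = VarianceThreshold(threshold=0.05).fit_transform(X)

# stage 2: k-means in the reduced space
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_reduced)
print("kept dimensions:", X_reduced.shape[1], "cluster sizes:", np.bincount(labels))
```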

Keywords: High dimensional clustering, sub-space, k-means, rough set, discernibility matrix.

15 Parametric Investigation of Diode and CO2 Laser in Direct Metal Deposition of H13 Tool Steel on Copper Substrate

Authors: M. Khalid Imran, Syed Masood, Milan Brandt, Sudip Bhattacharya, Jyotirmoy Mazumder

Abstract:

In the present investigation, H13 tool steel has been deposited on a copper alloy substrate using both CO2 and diode lasers. A detailed parametric analysis has been carried out to identify the optimum processing zone for depositing defect-free H13 tool steel coatings on the copper alloy substrate. Following parametric optimization, the microstructure and microhardness of the deposited clads were evaluated. SEM micrographs revealed a dendritic microstructure in both clads; however, the microhardness of the CO2 laser deposited clad was much higher than that of the diode laser deposited clad.

Keywords: CO2 laser, Diode laser, Direct Metal Deposition, Microstructure, Microhardness, Porosity.

14 Solution of Optimal Reactive Power Flow using Biogeography-Based Optimization

Authors: Aniruddha Bhattacharya, Pranab Kumar Chattopadhyay

Abstract:

Optimal reactive power flow is an optimization problem with one or more objectives, such as minimizing the active power losses for a fixed generation schedule. The control variables are generator bus voltages, transformer tap settings and the reactive power output of the compensating devices placed on different bus bars. The Biogeography-Based Optimization (BBO) technique has been applied to solve different kinds of optimal reactive power flow problems subject to operational constraints such as the power balance constraint, line flow limits and bus voltage limits. BBO searches for the global optimum mainly through two steps: migration and mutation. In the present work, BBO has been applied to the optimal reactive power flow problem on the IEEE 30-bus and standard IEEE 57-bus power systems for minimization of active power loss. The superiority of the proposed method has been demonstrated; considering the quality of the solutions obtained, it appears to be a promising approach for solving these problems.
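
A minimal sketch of the BBO migration and mutation steps on a generic benchmark (sphere function) follows; the power-system modelling (load flow, constraint handling) that the paper actually requires is omitted, and all parameter values are illustrative.

```python
import numpy as np

def bbo(fitness, dim=10, pop=30, gens=200, pmut=0.05, bounds=(-5, 5), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))
    for _ in range(gens):
        cost = np.array([fitness(x) for x in X])
        X = X[np.argsort(cost)]                    # best habitats first
        mu = np.linspace(1, 0, pop)                # emigration rate (best emigrates most)
        lam = 1 - mu                               # immigration rate (worst immigrates most)
        newX = X.copy()
        for i in range(pop):
            for d in range(dim):
                if rng.random() < lam[i]:          # migration: import this feature (SIV)
                    j = rng.choice(pop, p=mu / mu.sum())
                    newX[i, d] = X[j, d]
                if rng.random() < pmut:            # mutation
                    newX[i, d] = rng.uniform(lo, hi)
        newX[0] = X[0]                             # elitism: keep the best habitat
        X = newX
    cost = np.array([fitness(x) for x in X])
    return X[np.argmin(cost)], cost.min()

best, val = bbo(lambda x: float(np.sum(x**2)))     # sphere benchmark as a stand-in objective
print("best cost:", round(val, 6))
```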

Keywords: Active Power Loss, Biogeography-Based Optimization, Migration, Mutation, Optimal Reactive Power Flow.

13 Decoy-pulse Protocol for Frequency-coded Quantum Key Distribution

Authors: Sudeshna Bhattacharya, Pratyush Pandey, Pradeep Kumar K

Abstract:

We propose a decoy-pulse protocol for a frequency-coded implementation of the B92 quantum key distribution protocol. A direct extension of the decoy-pulse method to the frequency-coding scheme results in a security loss, as an eavesdropper can distinguish between signal and decoy pulses by measuring the carrier photon number without affecting other statistics. We overcome this problem by optimizing the ratio of the carrier photon number of the decoy pulse to that of the signal pulse to be as close to unity as possible. In our method the switching between signal and decoy pulses is achieved by changing the amplitude of the RF signal, as opposed to modulating the intensity of the optical signal, thus reducing system cost. We find an improvement of approximately a factor of 100 in the key generation rate using the decoy-state protocol. We also study the effect of source fluctuation on the key rate. Our simulation results show a key generation rate of 1.5×10⁻⁴ per pulse for link lengths up to 70 km. Finally, we discuss the optimum value of the average photon number of the signal pulse for a given key rate while also optimizing the carrier ratio.

Keywords: B92, decoy-pulse, frequency-coding, quantum key distribution.

12 Removal of Pb (II) from Aqueous Solutions using Fuller's Earth

Authors: Tarun Kumar Naiya, Biswajit Singha, Ashim Kumar Bhattacharya, Sudip Kumar Das

Abstract:

Fuller’s earth is a fine-grained, naturally occurring substance that has a substantial ability to adsorb impurities. In the present study Fuller’s earth has been characterized and used for the removal of Pb(II) from aqueous solution. The effects of various physicochemical parameters such as pH, adsorbent dosage and shaking time on adsorption were studied. The equilibrium studies showed that the solution pH was the key factor affecting adsorption, with an optimum pH of 5. Kinetic data for the adsorption of Pb(II) were best described by the pseudo-second-order model. The effective diffusion coefficient for Pb(II) adsorption was of the order of 10⁻⁸ m²/s. The adsorption data were well described by the Langmuir isotherm, with a maximum metal uptake of 103.3 mg/g of adsorbent. Mass transfer analysis was also carried out for the adsorption process; the values of the mass transfer coefficients obtained indicate that the transport of the adsorbate from the bulk to the solid phase was quite fast. The mean sorption energy calculated from the Dubinin-Radushkevich isotherm indicated that the metal adsorption process was chemical in nature.
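
To illustrate the Langmuir fit that underlies the reported maximum uptake of 103.3 mg/g, the sketch below fits the isotherm q_e = q_m·K_L·C_e / (1 + K_L·C_e) to hypothetical equilibrium data with SciPy; only the functional form follows the abstract, the data points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    """Equilibrium uptake qe (mg/g) as a function of equilibrium concentration Ce (mg/L)."""
    return qm * KL * Ce / (1.0 + KL * Ce)

# hypothetical equilibrium data for Pb(II) on Fuller's earth
Ce = np.array([5, 10, 20, 40, 80, 150], dtype=float)    # mg/L
qe = np.array([35, 55, 75, 90, 98, 102], dtype=float)   # mg/g

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[100, 0.05])
print(f"q_max = {qm:.1f} mg/g, K_L = {KL:.3f} L/mg")
```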

Keywords: Fuller's earth, Pseudo second order, Mass Transfer co-efficient, Langmuir

11 Metallurgical Analysis of Surface Defect in Telescopic Front Fork

Authors: Souvik Das, Janak Lal, Arthita Dey, Goutam Mukhopadhyay, Sandip Bhattacharya

Abstract:

The telescopic front fork (TFF) used in two wheelers, mainly motorcycles, is made from high strength steel and is manufactured by a high frequency induction welding process, wherein hot rolled and pickled coils are used as input raw material for rolling of hollow tubes, followed by heat treatment, surface treatment, cold drawing, tempering, etc. The final application demands superior quality TFF tubes with respect to surface finish and dimensional tolerances. This paper presents the investigation of two different types of fork failure during operation. The investigation consists of visual inspection, chemical analysis, characterization of microstructure, and energy dispersive spectroscopy; two failed tube samples were comprehensively investigated. In the case of Sample #1, the results revealed a pre-existing crack, known as a hook crack, which led to cracking of the tube. Metallographic examination showed that during field operation the pre-existing hook crack propagated to the surface, producing the crack in the pipe. In the case of Sample #2, the presence of internal oxidation with decarburised grains inside the material indicates that the defect originated at the slab stage.

Keywords: Telescopic front fork, induction welding, hook crack, internal oxidation.

10 Low Value Capacitance Measurement System with Adjustable Lead Capacitance Compensation

Authors: Gautam Sarkar, Anjan Rakshit, Amitava Chatterjee, Kesab Bhattacharya

Abstract:

The present paper describes the development of a low cost, highly accurate low capacitance measurement system that can be used over a range of 0-400 pF with a resolution of 1 pF. The range may be easily altered by a simple resistance or capacitance variation of the measurement circuit. The system uses the CD4093B quad two-input NAND Schmitt trigger with hysteresis for the measurement and is integrated with a PIC18F2550 microcontroller for data acquisition. The microcontroller interacts with software developed on the PC side through USB, and an attractive graphical user interface (GUI) provides the user with a real time, online display of the capacitance under measurement. The system uses a differential mode of capacitance measurement, with reference to a trimmer capacitance, that effectively compensates lead capacitances, a notorious error encountered in usual low capacitance measurements. The hysteresis provided in the Schmitt-trigger circuits enables reliable operation by greatly minimizing the possibility of false triggering due to stray interference, usually regarded as another source of significant error. Real life testing showed that the proposed system produces highly accurate capacitance readings when compared to high-end digital capacitance meters.
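
The measurement principle can be sketched as follows: a Schmitt-trigger RC relaxation oscillator produces a period roughly proportional to R·C, so the unknown capacitance is recovered from the measured period after subtracting the reading taken with only the reference (trimmer) and lead capacitance in circuit. The constant k, the resistor value and the periods below are illustrative assumptions, not the paper's calibration.

```python
# Hypothetical post-processing of measured oscillator periods (the paper uses a PIC18F2550 + PC GUI).
R = 1.0e6            # feedback resistor, ohms (assumed)
k = 0.9              # period constant of the Schmitt-trigger astable, T ~ k*R*C (assumed)

def capacitance_pf(period_s):
    """Convert an oscillator period (seconds) into capacitance in picofarads."""
    return period_s / (k * R) * 1e12

t_reference = 45e-6   # period with only trimmer + lead capacitance connected (s)
t_unknown   = 225e-6  # period with the device under test also connected (s)

# differential measurement cancels the lead/trimmer contribution
c_x = capacitance_pf(t_unknown) - capacitance_pf(t_reference)
print(f"measured capacitance ~ {c_x:.0f} pF")
```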

Keywords: Capacitance measurement, NAND Schmitt trigger, microcontroller, GUI, lead compensation, hysteresis.

9 Surrogate based Evolutionary Algorithm for Design Optimization

Authors: Maumita Bhattacharya

Abstract:

Optimization is often a critical issue in system design problems. Evolutionary algorithms are population-based, stochastic search techniques widely used as efficient global optimizers. However, finding the optimal solution to complex, high dimensional, multimodal problems often requires highly computationally expensive function evaluations and hence is practically prohibitive. The Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model presented in our earlier work [14] reduced computation time by controlled use of meta-models to partially replace the actual function evaluation with approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. Situations such as model formation involving variable input dimensions and noisy data certainly cannot be covered by this assumption. In this paper we present an enhanced version of DAFHEA that incorporates a multiple-model based learning approach for the SVM approximator. DAFHEA-II (the enhanced version of the DAFHEA framework) also overcomes the high computational expense involved with the additional clustering requirements of the original DAFHEA framework. The proposed framework has been tested on several benchmark functions, and the empirical results illustrate the advantages of the proposed technique.
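
A minimal sketch of a surrogate-assisted loop follows, in which a support vector regressor screens candidate solutions cheaply and only the most promising ones are re-evaluated with the expensive fitness function. This is a generic illustration of the idea, assuming a simple mutation-based EA, not the DAFHEA/DAFHEA-II algorithm itself.

```python
import numpy as np
from sklearn.svm import SVR

def expensive_fitness(x):                 # stand-in for a costly simulation
    return float(np.sum(x**2))

rng = np.random.default_rng(1)
dim, pop_size, gens = 8, 40, 30
pop = rng.uniform(-5, 5, (pop_size, dim))
archive_X = pop.copy()
archive_y = np.array([expensive_fitness(x) for x in pop])    # exact evaluations so far

for _ in range(gens):
    surrogate = SVR(kernel="rbf", C=10.0).fit(archive_X, archive_y)
    children = pop + rng.normal(0, 0.3, pop.shape)            # Gaussian mutation
    pred = surrogate.predict(children)                        # cheap surrogate screening
    best_idx = np.argsort(pred)[: pop_size // 4]              # re-evaluate the top 25% exactly
    exact = np.array([expensive_fitness(children[i]) for i in best_idx])
    archive_X = np.vstack([archive_X, children[best_idx]])
    archive_y = np.concatenate([archive_y, exact])
    pop = archive_X[np.argsort(archive_y)[:pop_size]]         # survivors: best exact solutions

print("best exact fitness found:", archive_y.min())
```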

Keywords: Evolutionary algorithm, Fitness function, Optimization, Meta-model, Stochastic method.

8 Utility of Range of Motion Measurements on Classification of Athletes

Authors: Dhiraj Dolai, Rupayan Bhattacharya

Abstract:

In this study, a comparison of the range of motion (ROM) of middle and long-distance runners and swimmers has been made. The mobility of the various joints is essential for the quick movement of any sportsman. Knowledge of ROM helps in preventing injuries, in repeating the movement, and in generating speed and power. ROM varies among individuals and is influenced by factors such as gender, age, and whether the motion is performed actively or passively. ROM plays an important role in both running and swimming, which are performed with due consideration of speed; the time taken to generate speed and the mobility of particular joints are very important for both kinds of athletes, and the two sports place different demands on the joints because the direction of motion differs. In this study, data were collected for a total of 102 subjects aged 12 to 18 years, divided into three groups: a control group (22), middle and long-distance runners (40), and swimmers (40). The swimmers had higher ROM in shoulder joint flexion, extension, abduction, and adduction movements. Middle and long-distance runners had significantly greater ROM than the control group in left shoulder joint flexion, with a mean difference of 5.82. Swimmers had significantly higher ROM than the control group in left shoulder joint flexion, with a mean difference of 24.84, and significantly higher ROM than the middle and long-distance runners in left shoulder flexion, with a mean difference of 19.02. A clearer picture will emerge from more detailed investigation.
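
The group comparisons reported here (e.g., swimmers vs. controls in left-shoulder flexion) are typically tested with an independent-samples t-test; a minimal sketch with invented ROM values follows, only to show the form of the calculation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# hypothetical left-shoulder flexion ROM (degrees); means chosen only for illustration
controls = rng.normal(165, 8, 22)
swimmers = rng.normal(190, 8, 40)

t, p = stats.ttest_ind(swimmers, controls, equal_var=False)   # Welch's t-test
print(f"mean difference = {swimmers.mean() - controls.mean():.2f} deg, t = {t:.2f}, p = {p:.4f}")
```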

Keywords: Range of motion, runners, swimmers, significance.

7 Improving Flash Flood Forecasting with a Bayesian Probabilistic Approach: A Case Study on the Posina Basin in Italy

Authors: Zviad Ghadua, Biswa Bhattacharya

Abstract:

The Flash Flood Guidance (FFG) provides the rainfall amount of a given duration necessary to cause flooding. The approach is based on the development of rainfall-runoff curves, which help to find the rainfall amount that would cause flooding. An alternative approach, mostly experimented with in Italian Alpine catchments, is based on determining threshold discharges from past events and on checking whether or not an oncoming flood exceeds some critical discharge threshold found beforehand. Both approaches suffer from large uncertainties in forecasting flash floods as, due to the simplistic approach followed, the same rainfall amount may or may not cause flooding. This uncertainty leads to the question whether a probabilistic model is preferable to a deterministic one in forecasting flash floods. We propose the use of a Bayesian probabilistic approach in flash flood forecasting. A prior probability of flooding is derived from historical data. Additional information, such as the antecedent moisture condition (AMC) and the rainfall amount over given rainfall thresholds, is used in computing the likelihood of observing these conditions given that a flash flood has occurred. Finally, the posterior probability of flooding is computed from the prior probability and the likelihood. The variation of the computed posterior probability with rainfall amount and AMC demonstrates the suitability of the approach for decision making in an uncertain environment. The methodology has been applied to the Posina basin in Italy. From the promising results obtained, we conclude that the Bayesian approach provides more realistic flash flood forecasts than the FFG.
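
The Bayesian update described above can be written in a few lines: a prior flood probability from historical frequency is combined with the likelihood of the observed AMC class and rainfall exceedance given flood / no flood. All probability values below are invented placeholders, not the Posina results.

```python
def posterior_flood(prior, p_evidence_given_flood, p_evidence_given_no_flood):
    """P(flood | evidence) via Bayes' rule for the binary flood / no-flood case."""
    num = p_evidence_given_flood * prior
    den = num + p_evidence_given_no_flood * (1.0 - prior)
    return num / den

prior = 0.05          # historical flash-flood frequency for the event duration (assumed)
# likelihood of observing "wet AMC and rainfall above threshold" ...
p_e_flood = 0.70      # ... given a flash flood occurred (assumed)
p_e_noflood = 0.10    # ... given no flash flood occurred (assumed)

print(f"posterior flood probability: {posterior_flood(prior, p_e_flood, p_e_noflood):.2f}")
```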

Keywords: Flash flood, Bayesian, flash flood guidance, FFG, forecasting, Posina.

6 Enzymatic Saccharification of Dilute Alkaline Pre-treated Microalgal (Tetraselmis suecica) Biomass for Biobutanol Production

Authors: M. A. Kassim, R. Potumarthi, A. Tanksale, S. C. Srivatsa, S. Bhattacharya

Abstract:

Enzymatic saccharification of biomass for reducing sugar production is one of the crucial processes in biofuel production through biochemical conversion. In this study, enzymatic saccharification of dilute potassium hydroxide (KOH) pre-treated Tetraselmis suecica biomass was carried out using cellulase obtained from Trichoderma longibrachiatum. Initially, the pre-treatment conditions were optimised by changing the alkali reagent concentration, retention time, and temperature. The pre-treated T. suecica biomass was also characterized using Fourier transform infrared spectroscopy and scanning electron microscopy. These analyses revealed that functional groups such as acetyl and hydroxyl groups, and the structure and surface of the T. suecica biomass, were changed by pre-treatment, which is favourable for the enzymatic saccharification process. Comparison of enzymatic saccharification of untreated and pre-treated microalgal biomass indicated that a higher level of reducing sugar can be obtained from pre-treated T. suecica. Enzymatic saccharification of pre-treated T. suecica biomass was optimised by changing temperature, pH, and the enzyme-to-solid ratio ([E]/[S]). The highest conversion of carbohydrate into reducing sugar, 95%, corresponding to a reducing sugar yield of 20 wt% from pre-treated T. suecica, was obtained at a temperature of 40 °C, pH 4.5 and [E]/[S] of 0.1 after 72 h of incubation. The hydrolysate obtained from enzymatic saccharification of pre-treated T. suecica biomass was further fermented into biobutanol using Clostridium saccharoperbutyliticum as the biocatalyst. The results from this study demonstrate the potential of dilute alkaline pre-treatment to enhance enzymatic saccharification and biobutanol production from microalgal biomass.

Keywords: Microalgal biomass, enzymatic saccharification, biobutanol, fermentation.

5 Automated Natural Hazard Zonation System with Internet-SMS Warning: Distributed GIS for Sustainable Societies Creating Schema & Interface for Mapping & Communication

Authors: Devanjan Bhattacharya, Jitka Komarkova

Abstract:

This research describes the implementation of a novel, stand-alone system for dynamic hazard warning. The system uses existing infrastructure already in place, such as mobile networks and a laptop/PC, together with a small software installation. The geospatial datasets are the maps of a region, which are likewise frugal to obtain; hence there is no need for heavy investment, and the warnings reach everyone with a mobile phone. A novel architecture of hazard assessment and warning is introduced in which major ICT technologies are interfaced to give a WebGIS based, dynamically updatable, real time geohazard warning communication system, integrating WebGIS with telecommunication technology in a sustainable and user friendly manner. Coupling of hazard zonation and hazard warning procedures into a single system has been shown, and a generalized architecture for deciphering a range of geo-hazards has been developed. The developmental work presented here can be summarized as: development of an internet-SMS based automated geo-hazard warning communication system; integration of a warning communication system with a hazard evaluation system; interfacing of different open-source technologies in the design and development of a warning system; modularization of different technologies towards development of a warning communication system; and automated data creation, transformation and dissemination over different interfaces. The architecture of the developed warning system is functionally automated and general enough to be used for any hazard, and its setup requirements have been kept to a minimum.
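
As a rough sketch of the warning-dissemination step only, the snippet below checks zone hazard indices against thresholds and hands messages to an SMS sender; the zone names, thresholds and the send_sms placeholder are hypothetical, since the paper does not publish its gateway interface.

```python
def send_sms(phone: str, text: str) -> None:
    """Placeholder: hand the message to an SMS gateway (interface not published by the paper)."""
    print(f"SMS to {phone}: {text}")

def dispatch_warnings(zone_hazard: dict, thresholds: dict, subscribers: dict) -> None:
    """Warn every subscriber of a zone whose hazard index exceeds its threshold."""
    for zone, level in zone_hazard.items():
        if level >= thresholds.get(zone, 1.0):
            for phone in subscribers.get(zone, []):
                send_sms(phone, f"Hazard warning for zone {zone}: index {level:.2f}")

# illustrative data: hazard indices produced by the WebGIS zonation step
dispatch_warnings(zone_hazard={"Z1": 0.82, "Z2": 0.31},
                  thresholds={"Z1": 0.75, "Z2": 0.75},
                  subscribers={"Z1": ["+91XXXXXXXXXX"]})
```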

Keywords: Geospatial, web-based GIS, geohazard, warning system.

4 Study of Equilibrium and Mass Transfer of Co- Extraction of Different Mineral Acids with Iron(III) from Aqueous Solution by Tri-n-Butyl Phosphate Using Liquid Membrane

Authors: Diptendu Das, Vikas Kumar Rahi, V. A. Juvekar, R. Bhattacharya

Abstract:

Extraction of Fe(III) from aqueous solution using tri-n-butyl phosphate (TBP) as the carrier needs a highly acidic medium (>6 N), as this favours formation of the chelating complex FeCl3·TBP. Conversely, stripping of iron(III) from the loaded organic solvent requires a neutral or alkaline medium to dissociate the same complex. It is observed that TBP co-extracts acids along with the metal, which causes reversal of the driving force of extraction, and iron(III) is re-extracted from the strip phase into the feed phase during liquid emulsion membrane (LEM) pertraction. Therefore, the rates of extraction of different mineral acids (HCl, HNO3, H2SO4) using TBP, with and without the presence of the metal Fe(III), were examined. It is revealed that acid extraction is enhanced in the presence of the metal. The mass transfer coefficients of both acid and metal extraction were determined using a bulk liquid membrane (BLM), with the average values obtained by fitting the derived model equation to the experimental data. The mass transfer coefficients of mineral acid extraction are in the order k(HNO3) = 3.3×10⁻⁶ m/s > k(HCl) = 6.05×10⁻⁷ m/s > k(H2SO4) = 1.85×10⁻⁷ m/s. The distribution equilibria of the above mentioned acids between the aqueous feed solution and a solution of tri-n-butyl phosphate (TBP) in organic solvents have been investigated. The stoichiometry of acid extraction reveals the formation of TBP·2HCl, HNO3·2TBP, and TBP·H2SO4 complexes. Moreover, extraction of iron(III) by TBP in HCl aqueous solution forms the complex FeCl3·TBP·2HCl, while in HNO3 medium it forms the complex 3FeCl3·TBP·2HNO3.

Keywords: Bulk Liquid Membrane (BLM) Transport, Iron(III) extraction, Tri-n-butyl Phosphate, Mass Transfer coefficient.

3 Authentic Learning for Computer Network with Mobile Device-Based Hands-On Labware

Authors: Kai Qian, Ming Yang, Minzhe Guo, Prabir Bhattacharya, Lixin Tao

Abstract:

Computer network courses are essential parts of the college computer science curriculum, and hands-on networking experience is well recognized as an effective approach to help students better understand network concepts, the layered architecture of network protocols, and the dynamics of networks. However, existing networking labs are usually server-based and relatively cumbersome, requiring a certain level of specialty and resources to set up and maintain the lab environment. Many universities and colleges lack resources in this field and have difficulty providing students with hands-on practice labs. A new affordable and easily adoptable approach to networking labs is desirable to enhance network teaching and learning. In addition, current network labs fall short in providing hands-on practice for modern wireless and mobile network learning. With the prevalence of smart mobile devices, wireless and mobile networks are permeating various aspects of our information society. Emerging mobile technology provides computer science students with more authentic learning opportunities, especially in network learning. Mobile device based hands-on labware can provide an excellent ‘real world’ authentic learning environment for computer networking, especially for wireless network study. In this paper, we present our mobile device-based hands-on labware (a series of lab modules) for computer network learning, which is guided by authentic learning principles to immerse students in a real-world, relevant learning environment. We have been using this labware in teaching computer network, mobile security, and wireless network classes. Student feedback shows that students learn more when they have a hands-on authentic learning experience.

Keywords: Mobile computing, android, network, labware.

2 Meta Model Based EA for Complex Optimization

Authors: Maumita Bhattacharya

Abstract:

Evolutionary algorithms are population-based, stochastic search techniques widely used as efficient global optimizers. However, many real-life optimization problems require finding the optimal solution to complex, high dimensional, multimodal problems involving computationally very expensive fitness function evaluations. Use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta models, or approximations of the actual fitness functions, to be evaluated instead. These meta models are orders of magnitude cheaper to evaluate than the actual function, and many regression and interpolation tools are available to build them. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks which involve the use of meta models for fitness function evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by controlled use of meta-models (in this case an approximate model generated by support vector machine regression) to partially replace the actual function evaluation with approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model, which does not take into account uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks using several benchmark functions demonstrate their efficiency.

Keywords: Meta model, Evolutionary algorithm, Stochastic technique, Fitness function, Optimization, Support vector machine.

1 Assessment of Socio-Cultural Sustainability: A Comparative Analysis of Two Neighborhoods in Kolkata Metropolitan Area

Authors: Tanima Bhattacharya, Joy Sen

Abstract:

To transform a space into a better livable and sustainable zone, the 2015 United Nations Summit in New York adopted 17 Sustainable Development Goals (SDGs) aimed directly at achieving inclusive, people-centric, sustainable development. Though sustainability rests on four pillars, namely ecological, economic, social and cultural, it is essentially reduced to economic and ecological considerations in the context of developing countries. Therefore, in most cases planning has narrowed its ambit to concentrate on tangible infrastructure, ignoring the fundamentals of socio-cultural heritage. With the accentuating hype of infrastructural augmentation, the lack of emphasis on traditional concerns like ethnicity and social connection has further diluted the situation, disintegrating cultural continuity. As cultural continuity loses cohesion, its growing absence increasingly acts as a catalyst degrading heritage structures, the spaces around and linking these structures, and the ability of stakeholders to identify themselves as rooted in that particular space. Hence, this paper argues that sustainability depends on people and their interaction with their surroundings, their culture and their livelihood. The interaction between people and their surroundings strengthens community building and social interaction, which brings stakeholders back to their roots. To assess the socio-cultural sustainability of the city of Kolkata, two study areas are selected: an old settlement in the northern part of the city of Kolkata (KMA), imbued with social connection and age-old cultural and ethnic bonding, and a cluster of new high-rises coming up in the Newtown area, part of the planned city extension on the eastern side of the city. Whereas Newtown prioritizes the surging post-industrial trends of economic aspiration and the ecological aspects of urban sustainability, the older settlements of northern Kolkata continue to represent the earliest community settlements of the British-colonial and native era, and even the pre-colonial era, permeated with socio-cultural reciprocation. Thus, to compare and assess the inlaid organizational structure of the spaces in the two cases, the selected areas have been surveyed to portray their current imageability. The argument of this paper is structured in five parts. First, the idea is introduced; second, a literature review is conducted to ground the proposed ideas; third, the methodology is discussed and appropriate case study areas are selected; fourth, surveys and analyses are presented; and lastly, the paper arrives at a set of conclusions by suggesting a threefold development to create a happy, healthy and sustainable community.

Keywords: Art innovation, current scenario assessment, heritage, imageability, socio-cultural sustainability.
