Search results for: oriented network
215 Predicting OpenStreetMap Coverage by Means of Remote Sensing: The Case of Haiti
Authors: Ran Goldblatt, Nicholas Jones, Jennifer Mannix, Brad Bottoms
Abstract:
Accurate, complete, and up-to-date geospatial information is the foundation of successful disaster management. When the 2010 Haiti Earthquake struck, accurate and timely information on the distribution of critical infrastructure was essential for the disaster response community for effective search and rescue operations. Existing geospatial datasets such as Google Maps did not have comprehensive coverage of these features. In the days following the earthquake, many organizations released high-resolution satellite imagery, catalyzing a worldwide effort to map Haiti and support the recovery operations. Among these organizations, OpenStreetMap (OSM), a collaborative project to create a free editable map of the world, used the imagery to help volunteers digitize roads, buildings, and other features, creating the most detailed map of Haiti in existence in just a few weeks. However, large portions of the island are still not fully covered by OSM. There is an increasing need for a tool to automatically identify which areas in Haiti, as well as in other countries vulnerable to disasters, are not fully mapped. The objective of this project is to leverage different types of remote sensing measurements, together with machine learning approaches, to identify geographical areas where OSM coverage of building footprints is incomplete. Several remote sensing measures and derived products were assessed as potential predictors of OSM building footprint coverage, including: intensity of light emitted at night (based on VIIRS measurements), spectral indices derived from the Sentinel-2 satellite (normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), soil-adjusted vegetation index (SAVI), urban index (UI)), surface texture (based on Sentinel-1 SAR measurements), elevation, and slope.
Additional remote sensing derived products, such as Hansen Global Forest Change, DLR's Global Urban Footprint (GUF), and World Settlement Footprint (WSF), were also evaluated as predictors, as well as the OSM street and road network (including junctions). A supervised classification with a random forest classifier predicted 89% of the variation in OSM building footprint area in a given cell. These predictions allowed for the identification of cells that are predicted to be covered but are not actually mapped yet. With these results, the methodology could be adapted to any location to assist with preparing for future disastrous events and ensure that essential geospatial information is available to support response and recovery efforts during and following major disasters.
Keywords: disaster management, Haiti, machine learning, OpenStreetMap, remote sensing
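A supervised workflow of this kind can be sketched with scikit-learn's random forest regressor. The grid-cell features and their synthetic relationship to footprint area below are illustrative assumptions, not the study's actual Haiti dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells = 2000
# Hypothetical per-cell predictors, named after the abstract's measures
X = np.column_stack([
    rng.gamma(2.0, 5.0, n_cells),   # night-time light intensity (VIIRS)
    rng.uniform(-1, 1, n_cells),    # NDVI
    rng.uniform(-1, 1, n_cells),    # NDBI
    rng.uniform(0, 500, n_cells),   # elevation (m)
])
# Synthetic target: footprint area driven mostly by lights and built-up index
y = 3.0 * X[:, 0] + 10.0 * X[:, 2] - 2.0 * X[:, 1] + rng.normal(0, 1, n_cells)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))

# Cells whose predicted footprint greatly exceeds the mapped footprint would
# be flagged as likely under-mapped in OSM
residual = model.predict(X_te) - y_te
flagged = residual > residual.mean() + 2 * residual.std()
print(f"R2 = {r2:.2f}, flagged cells: {int(flagged.sum())}")
```

In the real study, the response variable would be the OSM-mapped building footprint area per grid cell, and flagged cells would be those where remote sensing predicts substantially more built-up area than OSM currently contains.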
Procedia PDF Downloads 122

214 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region
Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho
Abstract:
The modeling of areas susceptible to soil loss by hydro-erosive processes provides a simplified representation of reality for predicting future behavior from the observation and interaction of a set of geoenvironmental factors. The models of potential areas for soil loss will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The choice of the municipality of Colorado do Oeste, in the south of the western Amazon, is due to soil degradation caused by anthropogenic activities, such as agriculture, road construction, overgrazing, and deforestation, as well as its environmental and socioeconomic configuration. Initially, a soil erosion inventory map will be constructed through various field investigations, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. 100 sampling units with the presence of erosion will be selected based on the assumptions indicated in the literature, and, to complement the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that, jointly, directly, or indirectly, influence the occurrence of soil erosion events. The chosen predictors are altitude, slope, aspect (orientation of the slope), slope curvature, composite topographic index, stream power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and ground surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste, Rondônia, using the SPSS Statistics 26 software.
Evaluation of the model will occur through the determination of the Cox & Snell R² and Nagelkerke R² values, the Hosmer-Lemeshow test, the log-likelihood value, and the Wald test, in addition to analysis of the confusion matrix, the ROC curve, and cumulative gain, according to the model specification. The validation of the synthesis map resulting from the models of potential soil erosion risk will occur by means of Kappa indices, accuracy, and sensitivity, as well as by field verification of the erosion susceptibility classes using drone photogrammetry. Thus, it is expected to obtain a map of the following classes of susceptibility to erosion: very low, low, moderate, high, and very high, which may constitute a screening tool to identify areas where more detailed investigations need to be carried out, allowing social resources to be applied more efficiently.
Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon
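The logistic-regression arm of this design, with its 100 presence / 100 absence units and the Cox & Snell and Nagelkerke R² statistics, can be sketched as follows. The two predictors and their distributions are illustrative assumptions, not the study's field data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
y = np.repeat([1, 0], 100)          # 100 erosion and 100 no-erosion units
# Hypothetical predictors: erosion sites are steeper and less vegetated here
slope = np.where(y == 1, rng.normal(25, 6, n), rng.normal(10, 6, n))
ndvi = np.where(y == 1, rng.normal(0.3, 0.1, n), rng.normal(0.6, 0.1, n))
X = np.column_stack([slope, ndvi])

model = LogisticRegression().fit(X, y)
p1 = model.predict_proba(X)[:, 1]

def log_likelihood(y, p):
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard against log(0)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

ll_full = log_likelihood(y, p1)
ll_null = log_likelihood(y, np.full(n, y.mean()))   # intercept-only model
r2_cox_snell = 1 - np.exp((2 / n) * (ll_null - ll_full))
r2_nagelkerke = r2_cox_snell / (1 - np.exp((2 / n) * ll_null))
print(f"Cox & Snell R2 = {r2_cox_snell:.2f}, Nagelkerke R2 = {r2_nagelkerke:.2f}")
```

Susceptibility classes (very low to very high) would then be obtained by thresholding the fitted probabilities over the whole municipality.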
Procedia PDF Downloads 66

213 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the whole industrial chain delivery link and accounts for a large proportion of the total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points have become trends in the development of modern logistics. Because customer demand is discrete and heterogeneous in both its needs and its spatial distribution, which leads to a higher delivery failure rate and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the users’ experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD).
To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large neighborhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large neighborhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results and derived helpful managerial insights for courier companies.
Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
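Adaptive large neighborhood search can be sketched as a destroy-and-repair loop whose operator weights are rewarded on improvement. The toy tour-length objective, the two removal operators, and all numeric settings below are illustrative assumptions, not the paper's LRP-PFHD formulation:

```python
import math
import random

random.seed(0)
# Toy delivery instance: 30 customer points in the unit square (illustrative,
# not the paper's LRP-PFHD benchmark instances)
pts = [(random.random(), random.random()) for _ in range(30)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cost(tour):
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def random_removal(tour, k=5):
    removed = random.sample(tour, k)
    return [c for c in tour if c not in removed], removed

def worst_removal(tour, k=5):
    # Remove the k customers whose removal saves the most tour length
    def saving(i):
        a, b, c = tour[i - 1], tour[i], tour[(i + 1) % len(tour)]
        return dist(pts[a], pts[b]) + dist(pts[b], pts[c]) - dist(pts[a], pts[c])
    worst = sorted(range(len(tour)), key=saving, reverse=True)[:k]
    removed = [tour[i] for i in worst]
    return [c for c in tour if c not in removed], removed

def greedy_insert(tour, removed):
    # Repair: reinsert each removed customer at its cheapest position
    for c in removed:
        best_pos = min(range(len(tour) + 1),
                       key=lambda i: cost(tour[:i] + [c] + tour[i:]))
        tour = tour[:best_pos] + [c] + tour[best_pos:]
    return tour

destroyers = [random_removal, worst_removal]
weights = [1.0, 1.0]                 # adaptive weights, rewarded on success

cur = list(range(30))
best = cur[:]
for _ in range(300):
    d = random.choices(range(len(destroyers)), weights)[0]
    partial, removed = destroyers[d](cur)
    cand = greedy_insert(partial, removed)
    if cost(cand) < cost(cur):
        cur = cand
        weights[d] += 0.5            # reward the operator that improved
    if cost(cand) < cost(best):
        best = cand
print(round(cost(best), 3))
```

A full LRP-PFHD solver would add facility-opening decisions, vehicle capacities, and the two product types on top of this destroy-and-repair skeleton.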
Procedia PDF Downloads 78

212 Morphology, Qualitative, and Quantitative Elemental Analysis of Pheasant Eggshells in Thailand
Authors: Kalaya Sribuddhachart, Mayuree Pumipaiboon, Mayuva Youngsabanant-Areekijseree
Abstract:
The ultrastructure of the eggshells of 20 pheasant species in Thailand, (Siamese Fireback, Lophura diardi), (Silver Pheasant, Lophura nycthemera), (Kalij Pheasant, Lophura leucomelanos crawfurdii), (Kalij Pheasant, Lophura leucomelanos lineata), (Red Junglefowl, Gallus gallus spadiceus), (Crested Fireback, Lophura ignita rufa), (Green Peafowl, Pavo muticus), (Indian Peafowl, Pavo cristatus), (Grey Peacock Pheasant, Polyplectron bicalcaratum bicalcaratum), (Lesser Bornean Fireback, Lophura ignita ignita), (Green Junglefowl, Gallus varius), (Hume's Pheasant, Syrmaticus humiae humiae), (Himalayan Monal, Lophophorus impejanus), (Golden Pheasant, Chrysolophus pictus), (Ring-Neck Pheasant, Phasianus sp.), (Reeves's Pheasant, Syrmaticus reevesii), (Polish Chicken, Gallus sp.), (Brahma Chicken, Gallus sp.), (Yellow Golden Pheasant, Chrysolophus pictus luteus), and (Lady Amherst's Pheasant, Chrysolophus amherstiae), was studied with the secondary electron imaging (SEI) and energy-dispersive X-ray analysis (EDX) detectors of a scanning electron microscope. Generally, all pheasant eggshells showed 3 layers: cuticle, palisade, and mammillary. The total thickness ranged from 190.28±5.94 to 838.96±16.31 µm. The palisade layer was the thickest, followed by the mammillary and cuticle layers. The palisade layer in all pheasant eggshells contained numerous vesicle holes firmly forming a network throughout the layer. The vesicle holes in the eggshells differed in porosity, ranging from 0.23±0.05 to 0.44±0.11 µm. The mammillary layer was the most compact layer, with variable shapes (broad-based V and U shapes) connecting to the shell membrane.
Elemental analysis of the 20 species' eggshells showed 9 apparent elements, carbon (C), oxygen (O), calcium (Ca), phosphorus (P), sulfur (S), magnesium (Mg), silicon (Si), aluminum (Al), and copper (Cu), at percentages of 28.90-8.33%, 60.64-27.61%, 55.30-14.49%, 1.97-0.03%, 0.08-0.03%, 0.50-0.16%, 0.30-0.04%, 0.06-0.02%, and 2.67-1.73%, respectively. It was found that Ca, C, and O showed the highest elemental compositions, which are essential for pheasant embryonic development, mainly present as calcium carbonate (CaCO3) at more than 97%. Meanwhile, Mg, S, Si, Al, and P were inorganic constituents of the eggshells directly related to an increase in shell hardness. Finally, the heavy metal copper (Cu) was observed in 4 eggshell species: Golden Pheasant (2.67±0.16%), Indian Peafowl (2.61±0.13%), Green Peafowl (1.97±0.74%), and Silver Pheasant (1.73±0.11%). A non-significant difference was found in the percentages of the 9 elements among all pheasant eggshells. This study provides useful biological and taxonomic information for pheasant conservation in Thailand.
Keywords: pheasant eggshells, secondary electron imaging (SEI) and energy-dispersive X-ray analysis (EDX), morphology, Thailand
Procedia PDF Downloads 234

211 The Rise of Blue Water Navy and its Implication for the Region
Authors: Riddhi Chopra
Abstract:
Alfred Thayer Mahan described the sea as a ‘great common,’ which would serve as a medium for communication, trade, and transport. The seas of Asia are witnessing an intriguing historical anomaly: the rise of an indigenous maritime power against the backdrop of US domination over the region. As China transforms from an inward-leaning economy to an outward-leaning one, it has become increasingly dependent on the global sea; as a result, we witness an evolution in its maritime strategy from near-seas defense to far-seas deployment. It is not only patrolling international waters but has also built a network of civilian and military infrastructure across the disputed oceanic expanse. The paper analyses China's reorientation from a coastal naval power to a blue-water navy in an era of extensive globalisation. The actions of the Chinese have created a zone of high alert amongst neighbors such as Japan, the Philippines, Vietnam, and North Korea. These nations are trying to align themselves so as to counter China's growing brinkmanship, but China has been pursuing its claims through a carefully calibrated strategy in the region, shunning any coercive measures taken by other forces. If China continues to expand its maritime boundaries, its neighbors, all smaller and weaker Asian nations, would be limited to a narrow band of the sea along their coastlines. Hence it is essential for the US to intervene and support its allies to offset Chinese supremacy. The paper intends to provide a profound analysis of the disputes in the South China Sea and the East China Sea, focusing on the Philippines and Japan respectively. Moreover, the paper attempts to give an account of US involvement in the region and its alignment with its Asian allies. The geographic dynamics are said to breed a national coalition dominating the strategic ambitions of China as well as the weak littoral states.
China has conducted behind-the-scenes diplomacy, trying to persuade its neighbors to support its position on the territorial disputes. These efforts have been successful in creating fault lines in ASEAN, thereby undermining the regional integrity needed to reach a consensus on the issue. Chinese diplomatic efforts have also forced the US to revisit its foreign policy and engage with players like Cambodia and Laos. The current scenario in the SCS points to a strong Chinese hold trying to outpace all others with no regard for international law. Chinese activities are in contrast with US principles like freedom of navigation, thereby signaling the US to take bold actions to prevent Chinese hegemony in the region. The paper ultimately seeks to explore the changing power dynamics among the various claimants, where a rival superpower like the US, pursuing the traditional policy of alliance formation, can play a decisive role in changing the status quo in the arena, consequently determining its future trajectory.
Keywords: China, East China Sea, South China Sea, USA
Procedia PDF Downloads 240

210 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines
Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky
Abstract:
Nowadays, society is working toward reducing energy losses and greenhouse gas emissions, as well as seeking clean energy sources, as a result of the constant increase in energy demand and emissions. Energy is lost in the gas pressure reduction stations at the delivery points of natural gas distribution systems (city gates). Installing pressure reduction turbines (PRT) parallel to the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining shaft work from the pressure-lowering process and generating electrical power. Currently, the Brazilian natural gas transportation network extends 9,409 km, and the system has 16 national and 3 international natural gas processing plants and more than 143 delivery points to final consumers. Thus, the potential of installing PRTs in Brazil is 66 MW of power, which could avoid the emission of 235,800 tons of CO2 per year and generate 333 GWh/year of electricity. On the other hand, an economic viability analysis of these energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, yielding a deterministic set of financial indicators for the project. However, in most cases, these variables cannot be predicted with sufficient accuracy, resulting in the need to consider, to a greater or lesser degree, the risk associated with the calculated financial return.
This paper presents an approach to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as the gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis techniques, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained. This constitutes a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of a potential PRT project's financial feasibility in Brazil. The methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.
Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, Monte Carlo methods
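As a sketch of the Monte Carlo step, the fragment below propagates uncertain energy output and energy price through a discounted cash flow and reports risk indicators. All costs, distributions, and rates are illustrative assumptions, not the Cuiabá project's figures:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, years = 10_000, 15
capex = 2.0e6                # illustrative turbine installation cost (USD)
opex = 80_000                # illustrative fixed yearly cost (USD)
rate = 0.10                  # illustrative discount rate

# Uncertain inputs sampled per simulation and per year (all distributions
# are illustrative assumptions, not project data)
energy_mwh = rng.normal(8_000, 1_000, (n_sims, years))   # yearly PRT output
price = rng.normal(60, 12, (n_sims, years))              # energy price, USD/MWh

cash = energy_mwh * price - opex
discount = (1 + rate) ** -np.arange(1, years + 1)
npv = cash @ discount - capex            # one NPV per simulated scenario

print(f"mean NPV: {npv.mean():,.0f} USD")
print(f"P(NPV < 0): {(npv < 0).mean():.1%}")     # a financial risk indicator
print(f"5th percentile: {np.percentile(npv, 5):,.0f} USD")
```

The empirical distribution of `npv` is exactly the kind of output from which the probability of a negative return and other risk indicators can be read off.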
Procedia PDF Downloads 112

209 Geomorphology and Flood Analysis Using Light Detection and Ranging
Authors: George R. Puno, Eric N. Bruno
Abstract:
The natural landscape of the Philippine archipelago, plus the current realities of climate change, makes the country vulnerable to flood hazards. Flooding has become a recurring natural disaster in the country, resulting in loss of lives and properties. Musimusi is among the rivers that have exhibited inundation, particularly at the inhabited floodplain portion of the watershed. During the event, rescue operations and the distribution of relief goods became a problem due to the lack of high-resolution flood maps to help the local government unit identify the most affected areas. In the attempt to minimize the impact of flooding, hydrologic modelling with high-resolution mapping is becoming more challenging and important. This study focused on the analysis of flood extent as a function of different geomorphologic characteristics of the Musimusi watershed. The methods include the delineation of morphometric parameters in the Musimusi watershed using Geographic Information System (GIS) and geometric calculation tools. A Digital Terrain Model (DTM), one of the derivatives of Light Detection and Ranging (LiDAR) technology, was used to determine the extent of river inundation, involving the application of the Hydrologic Engineering Center-River Analysis System (HEC-RAS) and Hydrologic Modeling System (HEC-HMS) models. The digital elevation model (DEM) from Synthetic Aperture Radar (SAR) was used to delineate the watershed boundary and river network. Datasets like mean sea level, river cross section, river stage, discharge, and rainfall were also used as input parameters. Curve number (CN), vegetation, and soil properties were calibrated based on the existing condition of the site. Results showed that the drainage density value of the watershed is low, which indicates highly permeable subsoil and thick vegetative cover. The watershed's elongation ratio value of 0.9 implies that the floodplain portion of the watershed is susceptible to flooding.
The bifurcation ratio value of 2.1 indicates a higher risk of flooding in localized areas of the watershed. The circularity ratio value (1.20) indicates that the basin is circular in shape, with high runoff discharge and low permeability of the subsoil. The heavy rainfall of 167 mm brought by Typhoon Seniang on December 29, 2014, characterized as high-intensity and long-duration with a return period of 100 years, produced an outflow of 316 m3s-1. A portion of the floodplain zone (1.52%) suffered inundation, with a maximum depth of 2.76 m. The information generated in this study is helpful to the local disaster risk reduction and management council in monitoring the affected sites for more appropriate decisions, so that the cost of rescue operations and relief goods distribution is minimized.
Keywords: flooding, geomorphology, mapping, watershed
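The morphometric indices cited in this abstract follow standard formulas; a small sketch with illustrative basin measurements (not the Musimusi watershed's actual values):

```python
import math

def drainage_density(total_stream_length_km, area_km2):
    """Total channel length per unit basin area (km per km2)."""
    return total_stream_length_km / area_km2

def elongation_ratio(area_km2, basin_length_km):
    """Schumm's ratio: diameter of a circle of equal area over basin length."""
    return (2 / basin_length_km) * math.sqrt(area_km2 / math.pi)

def circularity_ratio(area_km2, perimeter_km):
    """Miller's ratio 4*pi*A/P**2, equal to 1.0 for a perfectly circular basin."""
    return 4 * math.pi * area_km2 / perimeter_km ** 2

def bifurcation_ratio(streams_per_order):
    """Mean ratio of stream counts in successive Strahler orders."""
    ratios = [streams_per_order[i] / streams_per_order[i + 1]
              for i in range(len(streams_per_order) - 1)]
    return sum(ratios) / len(ratios)

# Illustrative basin measurements (not the Musimusi watershed's actual values)
print(round(drainage_density(42.0, 35.0), 2))    # km/km2
print(round(elongation_ratio(35.0, 8.0), 2))
print(round(circularity_ratio(35.0, 26.0), 2))
print(round(bifurcation_ratio([18, 8, 4, 2]), 2))
```

In the study itself, the stream lengths, counts, area, and perimeter feeding these formulas would come from the GIS delineation of the LiDAR- and SAR-derived terrain models.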
Procedia PDF Downloads 229

208 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index remains a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of truncated singular value decomposition as a regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration.
Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, in all modes the accuracy limit of ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
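Truncated SVD regularization of the kind described above can be sketched on a toy ill-posed problem. The kernel, grid, noise level, and truncation index below are illustrative assumptions, not the retrieval software's actual operators:

```python
import numpy as np

rng = np.random.default_rng(3)
# Ill-posed toy problem: a smooth kernel K maps a mono-modal "PSD" x to
# optical data b; kernel, grid, and noise level are illustrative assumptions
n = 40
s = np.linspace(0.05, 1.0, n)
K = np.exp(-np.subtract.outer(s, s) ** 2 / 0.02)   # smooth, ill-conditioned
x_true = np.exp(-(((s - 0.4) / 0.1) ** 2))         # mono-modal distribution
b = K @ x_true + 1e-4 * rng.normal(size=n)         # small measurement error

U, sigma, Vt = np.linalg.svd(K)

def tsvd_solve(k):
    """Truncated SVD solution: keep only the k largest singular values."""
    return Vt[:k].T @ ((U[:, :k].T @ b) / sigma[:k])

naive = np.linalg.solve(K, b)   # unregularized: tiny errors hugely amplified
x_reg = tsvd_solve(10)          # truncation index = regularization parameter

err_naive = np.linalg.norm(naive - x_true) / np.linalg.norm(x_true)
err_tsvd = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
print(f"relative error, naive: {err_naive:.1e}; TSVD: {err_tsvd:.1e}")
```

Choosing the truncation index (or, for the Padé iteration, the number of iteration steps) is exactly the regularization-parameter selection problem the abstract discusses.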
Procedia PDF Downloads 342

207 Developing a High Performance Cement Based Material: The Influence of Silica Fume and Organosilane
Authors: Andrea Cretu, Calin Cadar, Maria Miclaus, Lucian Barbu-Tudoran, Siegfried Stapf, Ioan Ardelean
Abstract:
Additives and mineral admixtures have become an integral part of cement-based materials. It is common practice to add silica fume to cement-based mixes in order to produce high-performance concrete. There is still a lack of scientific understanding of the effects silica fume has on the microstructure of hydrated cement paste. The aim of the current study is to develop high-performance materials with low permeability and high resistance to flexural stress using silica fume and an organosilane. The organosilane bonds with cement grains and silica fume, influencing both the workability and the final properties of the mix, especially the pore size distribution and pore connectivity. Silica fume is a known pozzolanic agent which reacts with the calcium hydroxide in hydrated cement paste, producing more C-S-H and improving the mechanical properties of the mix. It is believed that particles of silica fume act as capillary pore fillers and as nucleation centers for C-S-H and other hydration products. In order to design cement-based materials with added silica fume and organosilane, it is necessary first to understand the formation of the porous network during hydration and to observe the distribution of pores and their connectivity. Nuclear magnetic resonance (NMR) methods at low fields are non-destructive and allow the study of cement-based materials from the standpoint of their porous structure. Other methods, such as XRD and SEM-EDS, help create a comprehensive picture of the samples, along with classic mechanical tests (compressive and flexural strength measurements). The transverse relaxation time (T₂) was measured during the hydration of 16 samples prepared with two water/cement ratios (0.3 and 0.4) and different concentrations of organosilane (APTES, up to 2% by mass of cement) and silica fume (up to 6%). After hydration, the pore size distribution was assessed using the same NMR approach on the samples filled with cyclohexane.
The SEM-EDS and XRD measurements were applied to pieces and powders prepared from the samples used in mechanical testing, which were kept under water for 28 days. Adding silica fume does not influence the hydration dynamics of the cement paste, while the addition of organosilane extends the dormancy stage up to 10 hours. The size distribution of the capillary pores is not influenced by the addition of silica fume or organosilane, while the connectivity of the capillary pores decreases only when there is organosilane in the mix. No filling effect is observed, even at the highest concentration of silica fume. There is an apparent increase in the flexural strength of samples prepared only with silica fume and a decrease for those prepared with organosilane, with a few exceptions. XRD reveals that the pozzolanic reactivity of silica fume can only be observed when no organosilane is present, and the SEM-EDS method reveals the pore distribution, as well as the hydration products and the presence or absence of calcium hydroxide. The current work was funded by the Romanian National Authority for Scientific Research, CNCS – UEFISCDI, through project PN-III-P2-2.1-PED-2016-0719.
Keywords: cement hydration, concrete admixtures, NMR, organosilane, porosity, silica fume
Procedia PDF Downloads 160

206 Life Cycle Assessment Applied to Supermarket Refrigeration System: Effects of Location and Choice of Architecture
Authors: Yasmine Salehy, Yann Leroy, Francois Cluzel, Hong-Minh Hoang, Laurence Fournaison, Anthony Delahaye, Bernard Yannou
Abstract:
Taking the whole life cycle of a product into consideration is now an important step in the eco-design of a product or a technology. Life cycle assessment (LCA) is a standard tool to evaluate the environmental impacts of a system or a process. Despite the improvement in refrigerant regulation through protocols, the environmental damage of refrigeration systems remains important and needs to be reduced. In this paper, the environmental impacts of refrigeration systems in a typical supermarket are compared using the LCA methodology under different conditions. The system is used to provide cold at two temperature levels, medium and low, during a life period of 15 years. The most commonly used architectures of supermarket cold production systems are investigated: centralized direct expansion systems and indirect systems using a secondary loop to transport the cold. The variation of the power needed during seasonal changes and during the daily opening/closing periods of the supermarket is considered. R134a as the primary refrigerant fluid and two types of secondary fluids are considered. The composition of each system and the leakage rate of the refrigerant through its life cycle are taken from the literature and industrial data. Twelve scenarios are examined, based on the variation of three parameters: (1) location: France (Paris), Spain (Toledo), and Sweden (Stockholm); (2) source of electric consumption: photovoltaic panels or the low-voltage electric network; and (3) architecture: direct or indirect refrigeration systems. The OpenLCA and SimaPro software packages and different impact assessment methods were compared; the CML method is used to evaluate the midpoint environmental indicators. This study highlights the significant contribution of electric consumption to environmental damage compared to the impacts of refrigerant leakage.
The secondary loop allows lowering the refrigerant amount in the primary loop, which results in a decrease in the climate change indicators compared to the centralized direct systems. However, an exhaustive cost evaluation (CAPEX and OPEX) of both systems shows higher costs for the indirect systems. A significant difference between the countries was noticed, mostly due to differences in electricity production. In Spain, using photovoltaic panels helps to reduce the environmental impacts and the related costs efficiently; this scenario is the best alternative among those compared. Sweden is the country with the least environmental impacts. For both France and Sweden, the use of photovoltaic panels does not bring a significant difference, due to lower sunlight exposure than in Spain. Alternative solutions exist to reduce the impact of refrigerating systems, and a brief introduction to them is presented.
Keywords: eco-design, industrial engineering, LCA, refrigeration system
Procedia PDF Downloads 187

205 Altmetrics of South African Journals: Implications for Scholarly Impact of South African Research on Social Media
Authors: Omwoyo Bosire Onyancha
Abstract:
The Journal Citation Reports (JCR) of Thomson Reuters has, for decades, provided the data for bibliometrically assessing the impact of journals. In their criticism of the journal impact factor (JIF), a number of scholars, such as Priem, Taraborelli, Groth and Neylon (2010), observe that the “JIF is often incorrectly used to assess the impact of individual articles. It is troubling that the exact details of the JIF are a trade secret, and that significant gaming is relatively easy”. The emergence of alternative metrics (altmetrics) has introduced another dimension to how the impact of journals (and other units such as articles and even individual researchers) can be measured. Altmetrics is premised upon the fact that research is increasingly being disseminated through social network sites such as ResearchGate, Mendeley, Twitter, Facebook, LinkedIn, and ImpactStory, among others. This paper adopts informetric (including altmetric) techniques to report on the findings of a study conducted to investigate and compare the social media impact of 274 South African Post-Secondary Education (SAPSE)-accredited journals, which are recognized and accredited by the Department of Higher Education and Training (DHET) of South Africa (SA). We used multiple sources to extract data for the study, namely Altmetric.com and Thomson Reuters' Journal Citation Reports. Data was analyzed in order to determine South African journals' presence and impact on social media, as well as to contrast the social media impact with Thomson Reuters' citation impact. The Spearman correlation test was performed to compare the journals' social media impact and JCR citation impact.
Preliminary findings reveal that a total of 6360 articles published in 96 South African journals have received some attention on social media; the most commonly used social media platform was Twitter, followed by Mendeley, Facebook, news outlets, and CiteULike; there were 29 SA journals covered in the JCR in 2008, a number that grew to 53 by 2014; the journals indexed in Thomson Reuters performed much better, in terms of their altmetrics, than those not indexed in Thomson Reuters databases; nevertheless, there was high correlation among journals that featured in both datasets; the journals with the highest scores in Altmetric.com included the South African Medical Journal, African Journal of Marine Science, and Transactions of the Royal Society of South Africa, while the journals with high impact factors in the JCR were the South African Medical Journal, Onderstepoort: Journal of Veterinary Research, and Sahara: Journal of Social Aspects of HIV-AIDS; and Twitter has emerged as a strong avenue for sharing and communicating research published in South African journals. Implications of the results of the study for the dissemination of research conducted in South Africa are offered. Discussions based on the research findings as well as conclusions and recommendations are offered in the full-text paper.
Keywords: altmetrics, citation impact, journal citation reports, journal impact factor, journals, research, scholarly publishing, social media impact, South Africa
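The Spearman test used in the study is a rank correlation; a minimal pure-Python sketch (the journal scores below are hypothetical, not values from the study) illustrates how altmetric attention scores and impact factors can be compared:

```python
def rank(values):
    # Assign average ranks, handling ties (as in the standard Spearman test).
    sorted_idx = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(sorted_idx):
        j = i
        while j + 1 < len(sorted_idx) and values[sorted_idx[j + 1]] == values[sorted_idx[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[sorted_idx[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Spearman's rho is the Pearson correlation computed on the ranks.
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical journals: Altmetric attention score vs JCR impact factor.
altmetric = [120.0, 85.0, 40.0, 12.0, 5.0]
jif = [3.1, 2.4, 1.9, 0.8, 1.1]
print(round(spearman_rho(altmetric, jif), 3))  # → 0.9
```

Because the test operates on ranks rather than raw scores, it tolerates the very different scales of altmetric scores and impact factors.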
Procedia PDF Downloads 203
204 The Effect of Mindfulness-Based Interventions for Individuals with Tourette Syndrome: A Scoping Review
Authors: Ilana Singer, Anastasia Lučić, Julie Leclerc
Abstract:
Introduction: Tics, characterized by repetitive, sudden, non-voluntary motor movements or vocalizations, are prevalent in chronic tic disorder (CT) and Tourette Syndrome (TS). These neurodevelopmental disorders often coexist with various psychiatric conditions, leading to challenges and reduced quality of life. While medication in conjunction with behavioral interventions, such as Habit Reversal Training (HRT), Exposure and Response Prevention (ERP), and the Comprehensive Behavioral Intervention for Tics (CBIT), has shown efficacy, a significant proportion of patients experience persistent tics. Thus, innovative treatment approaches, such as mindfulness-based approaches, are necessary to improve therapeutic outcomes. Nonetheless, the effectiveness of mindfulness-based interventions in the context of CT and TS remains understudied. Objective: The objective of this scoping review is to provide an overview of the current state of research on mindfulness-based interventions for CT and TS, identify knowledge and evidence gaps, compare the effectiveness of mindfulness-based interventions with other treatment options, and discuss implications for clinical practice and policy development. Method: Using guidelines from Peters (2020) and the PRISMA-ScR, a scoping review was conducted. Multiple electronic databases were searched from inception until June 2023, including MEDLINE, EMBASE, PsycINFO, Global Health, PubMed, Web of Science, and Érudit. Inclusion criteria were applied to select relevant studies, and data extraction was independently performed by two reviewers. Results: Five papers were included in the study. Firstly, mindfulness interventions were found to be effective in reducing anxiety and depression while enhancing overall well-being in individuals with tics. Furthermore, the review highlighted the potential role of mindfulness in enhancing functional connectivity within the Default Mode Network (DMN) as a compensatory function in TS patients.
This suggests that mindfulness interventions may complement and support traditional therapeutic approaches, particularly HRT, by positively influencing brain networks associated with tic regulation and control. Conclusion: This scoping review contributes to the understanding of the effectiveness of mindfulness-based interventions in managing CT and TS. By identifying research gaps, this review can guide future investigations and interventions to improve outcomes for individuals with CT or TS. Overall, these findings emphasize the potential benefits of incorporating mindfulness-based interventions as one component within comprehensive treatment strategies. However, it is essential to acknowledge the limitations of this scoping review, such as the absence of a pre-established protocol and the limited number of studies available for inclusion. Further research and clinical exploration are necessary to better understand the specific mechanisms and optimal integration of mindfulness-based interventions with existing behavioral interventions for this population.
Keywords: scoping reviews, Tourette Syndrome, tics, mindfulness-based, therapy, intervention
Procedia PDF Downloads 83
203 ENDO-β-1,4-Xylanase from Thermophilic Geobacillus stearothermophilus: Immobilization Using Matrix Entrapment Technique to Increase the Stability and Recycling Efficiency
Authors: Afsheen Aman, Zainab Bibi, Shah Ali Ul Qader
Abstract:
Introduction: Xylan is a heteropolysaccharide composed of xylose monomers linked together through β-1,4 linkages within a complex xylan network. Owing to the wide applications of xylan hydrolytic products (xylose, xylobiose and xylooligosaccharides), researchers are focusing on the development of various strategies for efficient xylan degradation. One of the most important strategies is the use of heat-tolerant biocatalysts, which act as strong and specific cleaving agents. Therefore, the exploration of the microbial pool from extremely diversified ecosystems is considerably vital. Microbial populations from extreme habitats are keenly explored for the isolation of thermophilic entities. These thermozymes usually demonstrate a fast hydrolytic rate, can produce high yields of product, and are less prone to microbial contamination. Another possibility for degrading xylan continuously is the use of an immobilization technique. The current work is an effort to merge the positive aspects of both thermozymes and immobilization. Methodology: Geobacillus stearothermophilus was isolated from a soil sample collected near a blast furnace site. This thermophile is capable of producing a thermostable endo-β-1,4-xylanase which cleaves xylan effectively. In the current study, this thermozyme was immobilized within a synthetic and a non-synthetic matrix for continuous production of metabolites using the entrapment technique. The kinetic parameters of the free and immobilized enzyme were studied. For this purpose, calcium alginate and polyacrylamide beads were prepared. Results: For the synthesis of immobilized beads, sodium alginate (40.0 gL-1) and calcium chloride (0.4 M) were used. The temperature (50°C) and pH (7.0) optima of the immobilized enzyme remained the same for xylan hydrolysis; however, the enzyme-substrate catalytic reaction time increased from 5.0 to 30.0 minutes as compared to the free counterpart.
The diffusion limit of high molecular weight xylan (corncob) caused a decline in the Vmax of the immobilized enzyme from 4773 to 203.7 U min-1, whereas the Km value increased from 0.5074 to 0.5722 mg ml-1 with reference to the free enzyme. Immobilized endo-β-1,4-xylanase showed stability at high temperatures as compared to the free enzyme: it retained 18% and 9% residual activity at 70°C and 80°C, respectively, whereas the free enzyme completely lost its activity at both temperatures. The immobilized thermozyme displayed sufficient recycling efficiency and can be reused for up to five reaction cycles, indicating that this enzyme can be a plausible candidate for the paper processing industry. Conclusion: This thermozyme showed good immobilization yield and operational stability for hydrolyzing high molecular weight xylan. However, the enzyme's immobilization properties can be improved further by immobilizing it on different supports for industrial purposes.
Keywords: immobilization, reusability, thermozymes, xylanase
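The Vmax and Km figures above are standard Michaelis-Menten constants; a short sketch (the substrate concentration chosen is hypothetical) shows how they translate into reaction velocities for the free versus immobilized enzyme:

```python
def mm_velocity(vmax, km, s):
    # Michaelis-Menten kinetics: v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

# Kinetic constants reported in the abstract (corncob xylan substrate).
free_enzyme = {"vmax": 4773.0, "km": 0.5074}   # U/min, mg/mL
immobilized = {"vmax": 203.7, "km": 0.5722}

s = 2.0  # hypothetical substrate concentration, mg/mL
v_free = mm_velocity(free_enzyme["vmax"], free_enzyme["km"], s)
v_imm = mm_velocity(immobilized["vmax"], immobilized["km"], s)
print(f"free: {v_free:.1f} U/min, immobilized: {v_imm:.1f} U/min")
```

At this substrate level the immobilized enzyme runs more than an order of magnitude slower, which is consistent with the diffusion limitation the authors describe.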
Procedia PDF Downloads 373
202 Municipal Solid Waste Management in an Unplanned Hill Station in India
Authors: Moanaro Ao, Nzanthung Ngullie
Abstract:
Municipal solid waste management (MSWM) poses unique challenges in hilly urban settlements. Efforts have been made by municipalities, private players, non-governmental organizations, etc., to manage solid waste by preventing its generation and by reusing and recovering it into useful products to the extent possible, thereby minimizing its impact on the environment and human health. However, many constraints lead to inadequate management of solid waste. Kohima is an unplanned hill station city in the North Eastern Region of India. The city is facing numerous issues due to the mismanagement of the MSW generated. The Kohima Municipal Council (KMC) is the Urban Local Body (ULB) responsible for providing municipal services. The present MSWM system in Kohima comprises collection, transportation, and disposal of waste without any treatment. Several efforts and experimental projects on waste management have been implemented without success. Waste management in Kohima city is challenging due to its remote location, difficult topography, dispersed settlements within the city, sensitive ecosystem, etc. Furthermore, the narrow road network in Kohima, with limited scope for expansion, inadequate infrastructure facilities, and the financial constraints of the ULB add to the problems faced in managing solid waste. This hill station also has a unique system of traditional local self-governance; thus, shifting from a traditional system to a modern system in implementing systematic and scientific waste management is a challenge in itself. This study aims to analyse the existing situation of waste generation, evaluate the effectiveness of the existing management system for MSW, and evolve a strategic approach to achieve a sustainable and resilient MSWM system. The results of the study show that a holistic approach, including social, technical, environmental, and financial aspects, is needed to reform the MSWM system.
Stringent adherence to source segregation is required, encouraged through public participation and awareness programs. The active involvement of community-based organizations (CBOs) has brought a positive change in sensitizing the public. A waste management model was designed for adoption at the micro level, combining composting of household biodegradable waste with community-level incineration plants for non-biodegradable waste. Suitable locations for small waste stations were identified using geographic information system (GIS) tools for waste recovery and recycling. Inculcating a sense of responsibility towards waste management in every waste generator by implementing incentive-based strategies at the ward level was explored. Initiatives based on the 'polluter pays' principle were also explored to make the solid waste management model self-sustaining.
Keywords: municipal solid waste management, public participation, source segregation, sustainable
Procedia PDF Downloads 68
201 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry
Authors: C. A. Barros, Ana P. Barroso
Abstract:
Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement system in companies is not connected to the equipment and does not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG, as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented.
Firstly, a benchmarking analysis of the current competitors and commercial solutions linked to MSA, with respect to the Industry 4.0 paradigm, was performed. Next, the size of the target market for the MSA tool was analysed. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to perform the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the MSA tool, web and cloud-based; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art, as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
Keywords: automotive industry, industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis
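As one illustration of the statistics such a tool automates, the following sketch computes a range-method Gage R&R (repeatability and reproducibility) in the spirit of the AIAG MSA manual. The measurement data are invented, and the K1/K2 constants are the tabulated values for 2 trials and 2 appraisers:

```python
# Range-method Gage R&R sketch (hypothetical data; constants from AIAG MSA tables).
K1 = 0.8862  # 2 trials (1/d2*)
K2 = 0.7071  # 2 appraisers (1/d2*)

# measurements[appraiser][part][trial] — hypothetical readings of 5 parts.
measurements = {
    "A": [[10.1, 10.2], [9.8, 9.9], [10.4, 10.5], [10.0, 10.1], [9.7, 9.6]],
    "B": [[10.0, 10.1], [9.9, 9.8], [10.6, 10.4], [10.2, 10.0], [9.8, 9.7]],
}
n_parts, n_trials = 5, 2

# Repeatability (equipment variation, EV): average within-part range.
ranges = [max(t) - min(t) for parts in measurements.values() for t in parts]
r_bar = sum(ranges) / len(ranges)
ev = r_bar * K1

# Reproducibility (appraiser variation, AV): range of appraiser means, corrected for EV.
appraiser_means = [sum(sum(t) for t in parts) / (n_parts * n_trials)
                   for parts in measurements.values()]
x_diff = max(appraiser_means) - min(appraiser_means)
av_sq = (x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials)
av = max(av_sq, 0.0) ** 0.5  # clamp: a negative estimate means AV ≈ 0

grr = (ev ** 2 + av ** 2) ** 0.5
print(f"EV={ev:.4f}  AV={av:.4f}  GRR={grr:.4f}")
```

In a production tool these values would be compared against the part tolerance to obtain the %GRR acceptance metric.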
Procedia PDF Downloads 213
200 Machine Learning and Internet of Thing for Smart-Hydrology of the Mantaro River Basin
Authors: Julio Jesus Salazar, Julio Jesus De Lama
Abstract:
The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). Of course, this range of studies in a given basin is very varied and complex and presents the difficulty of collecting the data in real time. In this complex space, the study of variables can only be managed by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented the learning project of the sub-basin of the Shullcas river in the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning application was programmed to choose the algorithms that lead to the best solution for the determination of the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at heights close to 5000 m.a.s.l., giving the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km.
To minimize the energy consumption of the devices and avoid collisions between packets, distances should be kept between 5 and 10 km; in this way, the transmission power can be reduced and a higher bitrate can be used. In case the communication elements of the devices of the network (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption systems, such as ultra-low-power ARM Cortex-M microcontrollers, together with high-performance direct current (DC) to direct current (DC) converters. The machine learning system has initiated the learning of the Shullcas system to generate the best hydrology of the sub-basin. This will improve as the machine learning models are continuously updated with the incoming big data. This will provide services to each of the applications of the complex system to return the best estimates of the determined flows.
Keywords: hydrology, internet of things, machine learning, river basin
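As a toy stand-in for the rainfall-runoff learning described above (the sensor readings are invented), an ordinary least-squares fit of runoff against rainfall can be sketched as:

```python
# Minimal rainfall-runoff sketch: fit runoff Q (m3/s) as a linear function of
# rainfall P (mm) by ordinary least squares. Hypothetical readings; a toy
# stand-in for the basin's machine-learning pipeline.
rain =   [2.0, 5.0, 8.0, 12.0, 20.0, 30.0]   # mm per interval
runoff = [0.8, 1.9, 3.1, 4.6, 7.9, 11.8]     # m3/s observed downstream

n = len(rain)
mean_p = sum(rain) / n
mean_q = sum(runoff) / n
slope = (sum((p - mean_p) * (q - mean_q) for p, q in zip(rain, runoff))
         / sum((p - mean_p) ** 2 for p in rain))
intercept = mean_q - slope * mean_p

predicted = slope * 15.0 + intercept  # predicted runoff for 15 mm of rain
print(f"Q = {slope:.3f} * P + {intercept:.3f}; Q(15 mm) = {predicted:.2f} m3/s")
```

A real pipeline would replace this linear model with the algorithm selection the authors describe, but the data flow (sensor readings in, fitted relationship out) is the same.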
Procedia PDF Downloads 158
199 Influence of Kneading Conditions on the Textural Properties of Alumina Catalysts Supports for Hydrotreating
Authors: Lucie Speyer, Vincent Lecocq, Séverine Humbert, Antoine Hugon
Abstract:
Mesoporous alumina is commonly used as a catalyst support for the hydrotreating of heavy petroleum cuts. The process of fabrication usually involves the synthesis of the boehmite AlOOH precursor, a kneading-extrusion step, and a calcination in order to obtain the final alumina extrudates. Alumina is described as a complex porous medium, generally agglomerates constituted of aggregated nanocrystallites. Its porous texture directly influences the active phase deposition and mass transfer, and hence the catalytic properties. It is therefore easy to see that each step of the fabrication of the supports plays a role in building their porous network and has to be well understood to optimize the process. The synthesis of boehmite by precipitation of aluminum salts has been extensively studied in the literature, and various parameters, such as temperature or pH, are known to influence the size and shape of the crystallites and the specific surface area of the support. The calcination step, through the topotactic transition from boehmite to alumina, determines the final properties of the support and can tune the surface area, pore volume and pore diameters relative to those of boehmite. However, the kneading-extrusion step has been the subject of very few studies. It generally consists of two steps: an acid, then a basic kneading, where the boehmite powder is introduced into a mixer and successively mixed with an acid and then a base solution to form an extrudable paste. During the acid kneading, the induced positive charges on the hydroxyl surface groups of boehmite create an electrostatic repulsion which tends to separate the aggregates and even, depending on the conditions, the crystallites. The basic kneading, by reducing the surface charges, leads to a flocculation phenomenon and can control the reforming of the overall structure.
The separation and reassembling of the particles constituting the boehmite paste have a quite obvious influence on the textural properties of the material. In this work, we focus on the influence of the kneading step on alumina catalyst supports. Starting from an industrial boehmite, extrudates were prepared using various kneading conditions. The samples were studied by nitrogen physisorption in order to analyze the evolution of the textural properties, and by synchrotron small-angle X-ray scattering (SAXS), a more original method which brings information about the agglomeration and aggregation of the samples. The coupling of physisorption and SAXS enables a precise description of the samples, as well as accurate monitoring of their evolution as a function of the kneading conditions. These conditions are found to strongly influence the pore volume and pore size distribution of the supports. A mechanism for the evolution of the texture during the kneading step is proposed and could be attractive in order to optimize the texture of the supports and hence their catalytic performance.
Keywords: alumina catalyst support, kneading, nitrogen physisorption, small-angle X-ray scattering
Procedia PDF Downloads 252
198 Developing Dynamic Capabilities: The Case of Western Subsidiaries in Emerging Market
Authors: O. A. Adeyemi, M. O. Idris, W. A. Oke, O. T. Olorode, S. O. Alayande, A. E. Adeoye
Abstract:
The purpose of this paper is to investigate the process of capability building at the subsidiary level and the challenges to such a process. The relevance of external factors for capability development has not been explicitly addressed in empirical studies, though internal factors, acting as enablers, have been studied more extensively. With reference to external factors, subsidiaries are actively influenced by specific characteristics of the host country, implying a need to become fully immersed in local culture and practices. Specifically, in MNCs, there has been a widespread trend in management practice to increase subsidiary autonomy, with subsidiary managers being encouraged to act entrepreneurially and to take advantage of host country specificity. As such, it could be proposed that: P1: The degree to which subsidiary management is connected to the host country will positively influence the capability development process. Dynamic capabilities reside to a large measure with the subsidiary management team, but are impacted by the organizational processes, systems and structures that the MNC headquarters has designed to manage its business. At the subsidiary level, the weight of the subsidiary in the network, its initiative-taking and its profile building increase the supportive attention of the HQs and are relevant to the success of the process of capability building. Therefore, our second proposition is that: P2: Subsidiary role and HQ support are relevant elements in capability development at the subsidiary level. Design/Methodology/Approach: This study will adopt the multiple case studies approach, because case study research is relevant when addressing issues without known empirical evidence or with little developed prior theory. The key definitions and literature sources directly connected with the operations of western subsidiaries in emerging markets, such as China, are well established.
A qualitative approach, i.e., case studies of three western subsidiaries, will be adopted. The companies have similar products, they have operations in China, and all of them are mature in their internationalization process. Interviews with key informants, annual reports, press releases, media materials, presentation material to customers and stakeholders, and other company documents will be used as data sources. Findings: Western subsidiaries in emerging markets operate in a way substantially different from those in the West. What are the conditions initiating the outsourcing of operations? The paper will discuss and present two relevant propositions guiding that process. Practical Implications: MNC headquarters should be aware of the potential for capability development at the subsidiary level. This increased awareness could induce consideration in headquarters about the possible ways of encouraging such capability development and how to leverage these capabilities for better MNC headquarters and/or subsidiary performance. Originality/Value: The paper is expected to contribute to the theme of drivers of subsidiary performance with a focus on emerging markets. In particular, it will show how some external conditions could promote a capability-building process within subsidiaries.
Keywords: case studies, dynamic capability, emerging market, subsidiary
Procedia PDF Downloads 122
197 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico
Authors: Ismene Ithai Bras-Ruiz
Abstract:
Learning Analytics (LA) is employed by universities not only as a tool but as a specialized field to support students and professors. However, not all academic programs apply LA with the same goal or use the same tools. In fact, LA is formed by five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not just to inform academic authorities about the situation of the program, but also to detect at-risk students, professors with needs, or general problems. At the highest level, Artificial Intelligence techniques are applied to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. It is expected that each academic program decides which field it wants to utilize on the basis of its academic interests, but also its capacities related to professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communication technologies (ICT). Today, UNAM has one of the largest networks of higher education programs, with twenty-six academic programs in different faculties. This situation means that every faculty works with heterogeneous populations and academic problems. In this sense, every program has developed its own learning analytics techniques to address academic issues. In this context, an investigation was carried out to ascertain the situation of the application of LA in the academic programs of the different faculties.
The premise of the study was that not all the faculties have utilized advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all the programs know about LA, but this does not mean they do not work with it in a veiled or less explicit sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it allows appreciation of the work of the administration to improve the quality of teaching, and 2) it indicates whether it is possible to introduce other LA techniques. For this purpose, three instruments were designed to determine experience and knowledge of LA. These were applied to ten faculty coordinators and their personnel; thirty members were consulted (academic secretary, systems manager or data analyst, and coordinator of the program). The final report showed that almost all the programs work with basic statistical tools and techniques; this helps the administration only to know what is happening inside the academic program, but they are not ready to move up to the next level, which means applying Artificial Intelligence or recommender systems to reach a personalized learning system. This situation is not related to knowledge of LA, but to the clarity of the long-term goals.
Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise
Procedia PDF Downloads 127
196 Scalable Performance Testing: Facilitating The Assessment Of Application Performance Under Substantial Loads And Mitigating The Risk Of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (virtual machines), CloudWatch logs (to monitor errors and logs), EFS (a file storage system), and security groups offers several key benefits for organizations. Creating a performance testing framework using this approach helps optimize resource utilization, enables effective benchmarking, increases reliability, and delivers cost savings by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance.
The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master. The master then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.
Keywords: identifying application crashes under heavy load, JMeter with cloud services, scalable performance testing, JMeter master-slave using cloud services
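A JMeter master-slave run of this kind is typically driven from the command line; a sketch follows, in which the IP addresses, test plan name, and output paths are placeholders:

```shell
# On each slave EC2 instance, start the JMeter server process, advertising
# the instance's private IP so the master's RMI callbacks can reach it:
jmeter-server -Djava.rmi.server.hostname=10.0.1.11

# On the master, run the test plan in non-GUI mode (-n) against the listed
# remote slaves (-R), write consolidated results to a JTL file (-l), and
# generate the HTML dashboard report at the end (-e -o):
jmeter -n -t load_test.jmx \
       -R 10.0.1.11,10.0.1.12,10.0.1.13 \
       -l results.jtl -e -o report/
```

The security groups mentioned above must allow the RMI ports between master and slaves for this exchange of commands and results to work.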
Procedia PDF Downloads 26
195 Pesticides Monitoring in Surface Waters of the São Paulo State, Brazil
Authors: Fabio N. Moreno, Letícia B. Marinho, Beatriz D. Ruiz, Maria Helena R. B. Martins
Abstract:
Brazil is a top consumer of pesticides worldwide, and São Paulo State is one of the highest consumers among the Brazilian federative states. However, representative data about the occurrence of pesticides in surface waters of the São Paulo State are scarce. This paper presents the results of pesticide monitoring executed within the Water Quality Monitoring Network of CETESB (the environmental agency of the São Paulo State) in the 2018-2022 period. Surface water sampling points (21 to 25) were selected within basins of predominantly agricultural land use (5 to 85% of cultivated areas). The samples were collected throughout the year, including high-flow and low-flow conditions, with a sampling frequency of 4 to 6 times per year. Selection of pesticide molecules for monitoring followed a prioritization process based on EMBRAPA (Brazilian Agricultural Research Corporation) databases of pesticide use. Pesticide extractions from aqueous samples were performed according to USEPA 3510C and 3546 methods, following quality assurance and quality control procedures. Determination of pesticides in water extracts (ng L-1) was performed by high-performance liquid chromatography coupled with mass spectrometry (HPLC-MS) and by gas chromatography with nitrogen-phosphorus (GC-NPD) and electron capture detectors (GC-ECD). The results showed higher detection frequencies (20-65%) in surface water samples for Carbendazim (fungicide), Diuron/Tebuthiuron (herbicides) and Fipronil/Imidacloprid (insecticides). The frequencies of observation for these pesticides were generally higher at monitoring points located in sugarcane-cultivated areas. The following pesticides were most frequently quantified above the aquatic life benchmarks for freshwater (USEPA Office of Pesticide Programs, 2023) or Brazilian federal regulatory standards (CONAMA Resolution no. 357/2005): Atrazine, Imidacloprid, Carbendazim, 2,4-D, Fipronil, and Chlorpyrifos.
Higher median concentrations of Diuron and Tebuthiuron in the rainy months (October to March) indicated pesticide transport through surface runoff. However, measurable concentrations of Fipronil and Imidacloprid in the dry season (April to September) also indicate pathways related to subsurface or base-flow discharge after pesticide soil infiltration and leaching, or to dry deposition following aerial spraying. With the exception of Diuron, no temporal trends in the median concentrations of the most frequently quantified pesticides were observed. These results are important for assisting policymakers in developing strategies to reduce pesticide migration from agricultural areas to surface waters. Further studies will be carried out at selected points to investigate the potential risks of pesticide exposure to aquatic biota.
Keywords: pesticide monitoring, São Paulo State, water quality, surface waters
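As a minimal illustration of how the quantification frequencies and benchmark exceedances described above could be computed from raw monitoring records, the following Python sketch works through invented sample values; the concentrations and benchmark thresholds are placeholders, not the CETESB data or the actual USEPA benchmark values.

```python
# Illustrative sketch: quantification frequency and benchmark exceedance
# per pesticide. All concentrations (ng/L) and benchmarks are invented.
records = [
    ("carbendazim", 120.0), ("carbendazim", None), ("carbendazim", 85.0),
    ("diuron", 300.0), ("diuron", 45.0), ("diuron", None),
    ("fipronil", 15.0), ("fipronil", None), ("fipronil", 22.0),
]  # (pesticide, concentration); None means not quantified in that sample

# Hypothetical aquatic-life benchmarks (ng/L), for illustration only.
benchmarks = {"carbendazim": 100.0, "diuron": 200.0, "fipronil": 11.0}

def summarize(records, benchmarks):
    stats = {}
    for name, conc in records:
        s = stats.setdefault(name, {"n": 0, "quantified": 0, "exceed": 0})
        s["n"] += 1
        if conc is not None:
            s["quantified"] += 1
            if conc > benchmarks.get(name, float("inf")):
                s["exceed"] += 1
    # Report quantification frequency as a percentage of all samples.
    return {name: {"freq_pct": 100.0 * s["quantified"] / s["n"],
                   "exceedances": s["exceed"]}
            for name, s in stats.items()}

summary = summarize(records, benchmarks)
```

The same tabulation, run per monitoring point and per season, would yield the frequency and exceedance figures the abstract reports.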
194 Decision-Making, Expectations and Life Project in Dependent Adults Due to Disability
Authors: Julia Córdoba
Abstract:
People are not completely autonomous, as we live in society; therefore, people can be defined as relationally dependent. The lack, decrease or loss of physical, psychological and/or social interdependence due to a disability situation is known as dependence and is related to the need for help from another person in order to carry out activities of daily living. This population group lives with major social limitations that significantly reduce their participation and autonomy. They experience high levels of stigma and invisibility, both in private environments (family and close networks) and in the public sphere (environment, community). The importance of this study lies in the fact that the lack of support and adjustments leads to what authors call the circle of exclusion. This circle describes how not accessing services, due to the difficulties caused by the disability situation, has impacts at the biological, social and psychological levels, producing higher levels of exclusion and vulnerability. This study focuses on the process of autonomy and dependence in adults with disability, based on the model of disability proposed by the International Classification of Functioning, Disability and Health (ICF). The objectives are: i) to describe the relationship between autonomy and dependence based on socio-health variables and ii) to determine the relationship between the situation of autonomy and dependence and the expectations and interests of the participants. We propose a study using a survey technique through a previously validated virtual questionnaire. The data obtained will be analyzed using quantitative and qualitative methods to detail the profiles obtained. No fewer than 200 questionnaires will be administered to people between 18 and 64 years of age who self-identify as having some degree of dependency due to disability. For the analysis of the results, the two main variables of autonomy and dependence will be considered.
Socio-demographic variables such as age, gender identity, area of residence and family composition will be used. In relation to the biological dimension of the situation, participants will be asked about their diagnosis, if any, and the type of disability. For the description of these profiles of autonomy and dependence, the following variables will be used: self-perception, decision-making, interests, expectations and life project, care of their health condition, support and social network, and labor and educational inclusion. The relationship between the target population and the variables collected provides several guidelines that could form the basis for the analysis of other research of interest in terms of self-perception, autonomy and dependence. The study will identify the areas and situations in which people state that they have greater possibilities to decide and to have a say. It will identify the social (networks and support, educational background), demographic (age, gender identity and residence) and health-related variables (diagnosis and type of disability, quality of care) that may be most strongly related to situations of dependency or autonomy. It will also be studied whether the level of autonomy and/or dependence has an impact on the type of expectations and interests of the people surveyed.
Keywords: life project, disability, inclusion, autonomy
193 Functional Ingredients from Potato By-Products: Innovative Biocatalytic Processes
Authors: Salwa Karboune, Amanda Waglay
Abstract:
Recent studies indicate that health-promoting functional ingredients and nutraceuticals can help support and improve overall public health, which is timely given the aging of the population and the increasing cost of health care. The development of novel 'natural' functional ingredients is increasingly challenging, and biocatalysis offers powerful approaches to achieve this goal. Our recent research has focused on the development of innovative biocatalytic approaches for the isolation of protein isolates from potato by-products and the generation of peptides. Potato is a vegetable whose high-quality proteins are underestimated. In addition to their high proportion of essential amino acids, potato proteins possess angiotensin-converting enzyme-inhibitory potency and an ability to reduce plasma triglycerides associated with a reduced risk of atherosclerosis, and they stimulate the release of the appetite-regulating hormone CCK. Potato proteins have long been considered not economically feasible due to the low protein content (27% of dry matter) found in the tuber (Solanum tuberosum). However, potato ranks as the second-largest protein-supplying crop grown per hectare, after wheat. Potato proteins include patatin (40-45 kDa), protease inhibitors (5-25 kDa), and various high-molecular-weight proteins. Non-destructive techniques for the extraction of proteins from potato pulp and for the generation of peptides are needed in order to minimize functional losses and enhance quality. A promising approach for isolating the potato proteins was developed, which involves the use of multi-enzymatic systems containing selected glycosyl hydrolase enzymes that work synergistically to open the plant cell wall network. This enzymatic approach is advantageous due to: (1) the use of milder reaction conditions, (2) the high selectivity and specificity of enzymes, (3) the low cost and (4) the ability to market natural ingredients.
Another major benefit of this enzymatic approach is the elimination of a costly purification step; indeed, these multi-enzymatic systems are able to isolate proteins while fractionating them, owing to their specificity and selectivity and their minimal proteolytic activities. The isolated proteins were used for the enzymatic generation of active peptides. In addition, they were applied in a reduced-gluten cookie formulation, as consumers are placing high demand on easy, ready-to-eat snack foods with high nutritional quality and little to no gluten. The addition of potato protein significantly improved the textural hardness of reduced-gluten cookies, making them more comparable to cookies made with wheat flour alone. The presentation will focus on our recent proof-of-principle results illustrating the feasibility and efficiency of new biocatalytic processes for the production of innovative functional food ingredients from potato by-products, whose potential health benefits are increasingly being recognized.
Keywords: biocatalytic approaches, functional ingredients, potato proteins, peptides
192 The Lived Experience of Pregnant Saudi Women Carrying a Fetus with Structural Abnormalities
Authors: Nasreen Abdulmannan
Abstract:
Fetal abnormalities are categorized as structural, non-structural, or a combination of both. Fetal structural abnormalities (FSA) include, but are not limited to, Down syndrome, congenital diaphragmatic hernia, and cleft lip and palate. These abnormalities can be detected early in pregnancy, at approximately 9-20 weeks of gestation. Etiological factors for FSA are unknown; however, transmitted genetic risk can be one of them. Consanguineous marriage, often referred to as inbreeding, represents a significant risk factor for FSA due to the increased likelihood of deleterious genetic traits shared by both biological parents. In a country such as the Kingdom of Saudi Arabia (KSA), consanguineous marriage is common, which creates a significant risk of children being born with congenital abnormalities. Historically, the practice of consanguinity occurred commonly among European royalty. For example, Great Britain's Queen Victoria married her German first cousin, Prince Albert of Coburg. Although a more distant blood relationship, the United Kingdom's Queen Elizabeth II married her cousin, Prince Philip of Greece and Denmark, both of them direct descendants of Queen Victoria. In Middle Eastern countries, a high incidence of consanguineous unions still exists, including in the KSA. Previous studies indicated that a significant gap exists in understanding the lived experiences of Saudi women dealing with an FSA-complicated pregnancy. For this qualitative phenomenological study investigating the lived experiences of pregnant Saudi women carrying a child with FSA, eleven participants were interviewed using a semi-structured interview format. This study addressed the gaps in the current literature regarding the lived experiences of pregnant Saudi women whose pregnancies were complicated by FSA. In addition, the researcher acquired knowledge about the available support and resources as well as the Saudi cultural perspective on FSA.
This research explored the lived experiences of pregnant Saudi women using Giorgi's (2009) approach to data collection and data management. The findings cover five major themes: (1) initial maternal reaction to the FSA diagnosis on ultrasound screening; (2) strengthening of the maternal relationship with God; (3) maternal concern for their child's future; (4) feeling supported by their loved ones; and (5) lack of healthcare provider support and guidance. Future research in the KSA is needed to explore the support networks available to these mothers. This study recommends further clinical nursing research, nursing education, clinical practice, and healthcare policies/procedures to provide opportunities for improvement in nursing care and to increase awareness in KSA society.
Keywords: fetal structural abnormalities, psychological distress, health provider, health care
191 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining
Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj
Abstract:
Small and medium enterprises (SMEs) play an important role in the economy of many countries. Considering the overall world economy, SMEs represent 95% of all businesses in the world, accounting for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators are primarily obtained via questionnaires, which is very laborious and time-consuming, or are provided by financial institutions and are thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach to obtaining valuable insights into the business world. WM enables automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have frequently been studied to anticipate growth of sales volume for e-commerce platforms, their application to the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities.
In an initial step, we examine how structured and semi-structured Web data from governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company are used as ground-truth data (i.e. growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest and Artificial Neural Network, are applied and compared, with the goal of optimizing prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to the predictive power of the model.
Keywords: data mining, SME growth, success factors, web mining
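To illustrate the kind of web-mining feature extraction described above, the following sketch derives simple count-based features from a page's text; the signal terms and the example page are assumptions made for illustration, not the features actually used in the study.

```python
# Sketch: deriving simple web-mining features from a company's website
# text to feed a growth-prediction model. The signal-term lists and the
# HTML snippet are illustrative assumptions only.
import re

SIGNAL_TERMS = {
    "hiring": ["career", "job", "vacancy", "hiring"],
    "ecommerce": ["shop", "checkout", "cart", "order online"],
    "international": ["export", "worldwide", "international"],
}

def extract_features(html: str) -> dict:
    """Turn raw page markup into count-based features for a classifier."""
    text = re.sub(r"<[^>]+>", " ", html).lower()  # crude tag stripping
    features = {f"n_{k}": sum(text.count(t) for t in terms)
                for k, terms in SIGNAL_TERMS.items()}
    features["page_length"] = len(text.split())
    return features

page = "<html><body><h1>Careers</h1>We are hiring! Order online in our shop.</body></html>"
feats = extract_features(page)
```

Feature vectors of this form, collected per SME website, would then be concatenated with the insurer-provided ground-truth labels to train the classifiers mentioned above.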
190 The Relationship between Wasting and Stunting in Young Children: A Systematic Review
Authors: Susan Thurstans, Natalie Sessions, Carmel Dolan, Kate Sadler, Bernardette Cichon, Shelia Isanaka, Dominique Roberfroid, Heather Stobagh, Patrick Webb, Tanya Khara
Abstract:
For many years, wasting and stunting have been viewed as separate conditions without clear evidence supporting this distinction. In 2014, the Emergency Nutrition Network (ENN) examined the relationship between wasting and stunting and published a report highlighting the evidence for linkages between the two forms of undernutrition. This systematic review aimed to update the evidence generated since that 2014 report to better understand the implications for improving child nutrition, health and survival. Following PRISMA guidelines, the review was conducted using search terms describing the relationship between wasting and stunting. Studies from low- and middle-income countries involving children under five that assessed both ponderal growth/wasting and linear growth/stunting, as well as the association between the two, were included. Risk of bias was assessed in all included studies using SIGN checklists. Forty-five studies met the inclusion criteria: 39 peer-reviewed studies, 1 manual chapter, 3 pre-print publications and 2 published reports. The review found a strong association between the two conditions, whereby episodes of wasting contribute to stunting and, to a lesser extent, stunting leads to wasting. Possible interconnected physiological processes and common risk factors drive an accumulation of vulnerabilities. Peak incidence of both wasting and stunting was found to be between birth and three months. A significant proportion of children experience concurrent wasting and stunting: country-level data suggest that up to 8% of children under 5 may be both wasted and stunted at the same time, and global estimates translate to around 16 million children. Children with concurrent wasting and stunting have an elevated risk of mortality compared to children with one deficit alone. These children should therefore be considered a high-risk group in the targeting of treatment.
Wasting, stunting and concurrent wasting and stunting appear to be more prevalent in boys than girls, and concurrent wasting and stunting appears to peak between 12 and 30 months of age, with younger children the most affected. Seasonal patterns in the prevalence of both wasting and stunting are seen in longitudinal and cross-sectional data; in particular, season of birth has been shown to have an impact on a child's subsequent experience of wasting and stunting. Evidence suggests that the use of mid-upper-arm circumference combined with weight-for-age Z-score might effectively identify children most at risk of near-term mortality, including those concurrently wasted and stunted. Wasting and stunting frequently occur in the same child, either simultaneously or at different moments through their life course. Evidence of a process of accumulation of nutritional deficits, and therefore of risk, over the life course of a child demonstrates the need for a more integrated approach to prevention and treatment strategies to interrupt this process. To achieve this, undernutrition policies, programmes, financing and research must become more unified.
Keywords: concurrent wasting and stunting, review, risk factors, undernutrition
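The concurrence statistics discussed above rest on classifying each child against anthropometric cut-offs. A minimal sketch, using the standard WHO cut-off of -2 for weight-for-height (wasting) and height-for-age (stunting) z-scores and invented child records, could look like this:

```python
# Sketch: flagging concurrent wasting and stunting from anthropometric
# z-scores, using the standard WHO cut-off of -2. The child records are
# invented for illustration.
children = [
    {"id": 1, "whz": -2.5, "haz": -2.8},  # wasted and stunted
    {"id": 2, "whz": -1.0, "haz": -2.3},  # stunted only
    {"id": 3, "whz": -2.1, "haz": -1.5},  # wasted only
    {"id": 4, "whz": 0.2, "haz": -0.4},   # neither
]

def classify(child, cutoff=-2.0):
    """Return the child's category based on WHZ and HAZ cut-offs."""
    wasted = child["whz"] < cutoff
    stunted = child["haz"] < cutoff
    if wasted and stunted:
        return "concurrent"
    if wasted:
        return "wasted"
    if stunted:
        return "stunted"
    return "none"

labels = {c["id"]: classify(c) for c in children}
concurrent_pct = 100.0 * sum(v == "concurrent"
                             for v in labels.values()) / len(children)
```

Applied to survey data, the same tabulation yields the concurrent prevalence figures (such as the up-to-8% estimate) reported above.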
189 Automatic Identification and Classification of Contaminated Biodegradable Plastics using Machine Learning Algorithms and Hyperspectral Imaging Technology
Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik
Abstract:
Plastic waste has emerged as a critical global environmental challenge, primarily driven by the prevalent use of conventional plastics derived from petrochemical refining and manufacturing in modern packaging. While these plastics serve vital functions, their persistence in the environment after disposal poses significant threats to ecosystems. Addressing this issue necessitates new approaches, one of which involves the development of biodegradable plastics designed to degrade under controlled conditions, such as industrial composting facilities. It is imperative to note that compostable plastics are engineered for degradation within specific environments and are not suited to uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized only when it is subjected to industrial composting, preventing environmental contamination and waste-stream pollution. Effective sorting technologies are therefore essential to increase composting rates for these materials and diminish the risk of contaminating recycling streams. In this study, we leverage hyperspectral imaging (HSI) technology coupled with advanced machine learning algorithms to accurately identify various types of plastics, encompassing conventional variants such as Polyethylene terephthalate (PET), Polypropylene (PP), Low-density polyethylene (LDPE) and High-density polyethylene (HDPE), and biodegradable alternatives such as Polybutylene adipate terephthalate (PBAT), Polylactic acid (PLA), and Polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states.
Five distinct machine learning algorithms, namely Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Logistic Regression and Decision Tree, were developed and evaluated for their classification performance. Remarkably, the Logistic Regression and CNN models exhibited the most promising outcomes, achieving a perfect accuracy rate of 100% on the training and validation datasets, while the testing dataset yielded an accuracy exceeding 80%. The successful implementation of this sorting technology within recycling and composting facilities holds the potential to significantly raise recycling and composting rates, helping to establish a circular economy for plastics and offering a viable solution to mitigate plastic pollution.
Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms
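As a toy stand-in for the spectral classifiers evaluated in the study, the sketch below assigns a polymer class by nearest mean reference spectrum; the four-band "spectra" are invented for illustration (real HSI pixels carry hundreds of bands), and this is not one of the five models the authors trained.

```python
# Sketch: a minimal nearest-mean spectral classifier, illustrating how a
# pixel spectrum is matched to a polymer class. Reflectance values are
# invented; real hyperspectral data have hundreds of bands per pixel.
REFERENCE = {  # assumed mean reflectance per polymer class (4 bands)
    "PET":  [0.82, 0.55, 0.30, 0.12],
    "PLA":  [0.60, 0.70, 0.45, 0.20],
    "PBAT": [0.35, 0.40, 0.65, 0.50],
}

def classify_spectrum(spectrum):
    """Assign the polymer class whose mean spectrum is closest (L2 distance)."""
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(spectrum, ref)) ** 0.5
    return min(REFERENCE, key=lambda name: dist(REFERENCE[name]))

label = classify_spectrum([0.80, 0.57, 0.28, 0.15])
```

The models named in the abstract replace this nearest-mean rule with learned decision boundaries (PLS-DA loadings, SVM margins, CNN filters), but the pixel-in, class-out structure is the same.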
188 Steel Concrete Composite Bridge: Modelling Approach and Analysis
Authors: Kaviyarasan D., Satish Kumar S. R.
Abstract:
India, being vast in area and population and with great scope for international business, is expected to see substantial growth in its roadway and railway network connections. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and a few are getting very old, so there is a strong likelihood of repairing existing bridges or building new ones. Analysis and design of such bridges follow conventional procedures and end up with heavy, uneconomical sections. Such heavy steel bridges, when subjected to strong seismic shaking, have a greater chance of failing through instability, because the members are rigid and stocky rather than flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting methods of analysis and tools for numerical and analytical modeling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of the structure, static pushover analysis is generally adopted at the research level. Though static pushover analysis is now used extensively for framed steel and concrete buildings to study their behaviour under lateral actions, findings from pushover analyses of buildings cannot be used directly for bridges, because bridges have completely different performance requirements, behaviour and typology compared to buildings. Long-span steel bridges are mostly truss bridges. Because truss bridges are formed by many members and connections, failure of the system does not happen suddenly with a single event or the failure of one member. Failure usually initiates in one member and progresses gradually to the next, and so on, under further loading.
This kind of progressive collapse of a truss bridge depends on many factors, of which the live-load distribution and the span-to-length ratio are the most significant. The ultimate collapse, in any case, occurs through buckling of the compression members. For regular bridges, a single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for complicated bridges, such as heavy steel bridges, skewed bridges or bridges with complicated dynamic behaviour, a nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of bridges and advancements in computational facilities, the current practice of bridge analysis and design has moved toward ascertaining the performance levels of bridges based on the damage caused by seismic shaking. Whereas building performance levels deal mainly with life safety and collapse prevention, bridges deal mostly with the extent of damage and how quickly it can be repaired, with or without disturbing the traffic, after a strong earthquake event. This paper compiles the wide spectrum of approaches, from modeling to analysis, for steel-concrete composite truss bridges in general.
Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge
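Since the abstract notes that ultimate collapse is governed by buckling of the compression members, a minimal screening of members against the Euler critical load can illustrate the idea; the section properties, effective lengths and axial forces below are invented for illustration, not taken from any real bridge design.

```python
# Sketch: screening truss compression members against the Euler critical
# load P_cr = pi^2 * E * I / L_eff^2 (pin-ended member). Member data are
# invented; a real design check would also cover inelastic buckling,
# effective-length factors and code-based reduction curves.
import math

E = 200e9  # Young's modulus of steel, Pa

def euler_critical_load(I, L_eff):
    """Elastic critical buckling load of a pin-ended member, in newtons."""
    return math.pi ** 2 * E * I / L_eff ** 2

# (name, second moment of area m^4, effective length m, axial compression N)
members = [
    ("diagonal-1", 8.0e-6, 6.0, 3.0e5),
    ("chord-2", 2.0e-5, 8.0, 7.0e5),
]

utilization = {name: P / euler_critical_load(I, L)
               for name, I, L, P in members}
critical = [name for name, u in utilization.items() if u >= 1.0]
```

In a progressive-collapse simulation, members flagged this way would be removed or softened and the load redistributed, step by step, to trace the collapse pattern described above.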
187 European Commission Radioactivity Environmental Monitoring Database REMdb: A Law (Art. 36 Euratom Treaty) Transformed in Environmental Science Opportunities
Authors: M. Marín-Ferrer, M. A. Hernández, T. Tollefsen, S. Vanzo, E. Nweke, P. V. Tognoli, M. De Cort
Abstract:
Under the terms of Article 36 of the Euratom Treaty, European Union Member States (MSs) must periodically communicate information on environmental radioactivity levels to the European Commission (EC). Compilations of the information received have been published by the EC as a series of reports beginning in the early 1960s. The environmental radioactivity results received from the MSs have been entered into the Radioactivity Environmental Monitoring database (REMdb) of the Institute for Transuranium Elements of the EC Joint Research Centre (JRC), sited in Ispra (Italy), as part of its Directorate-General for Energy (DG ENER) support programme. The REMdb offers the scientific community dealing with environmental radioactivity countless research opportunities to exploit the nearly 200 million records received from MSs, containing information on radioactivity levels in milk, water, air and mixed diet. The REM action was created shortly after the Chernobyl crisis to support the EC in its responsibility to provide qualified information to the European Parliament and the MSs on the levels of radioactive contamination of the various compartments of the environment (air, water, soil). Hence, the main line of REM's activities concerns the improvement of procedures for the collection of environmental radioactivity concentrations under routine and emergency conditions, as well as making this information available to the general public. In this way, REM ensures the availability of tools for inter-communication and for access to this information by users from the Member States and other European countries. Specific attention is given to further integrating the new MSs into the existing information exchange systems and to assisting Candidate Countries in fulfilling these obligations in view of their EU membership.
Article 36 of the Euratom Treaty requires the competent authorities of each MS to provide the EC regularly with the environmental radioactivity monitoring data resulting from their Article 35 obligations, in order to keep the EC informed of the levels of radioactivity in the environment (air, water, milk and mixed diet) that could affect the population. The REMdb has two main objectives: to keep a historical record of radiological accidents for further scientific study, and to collect the environmental radioactivity data gathered through the national environmental monitoring programs of the MSs in order to prepare the comprehensive annual monitoring reports (MR). The JRC continues its activity of collecting, assembling, analyzing and providing this information to the public and the MSs, even during emergency situations. In addition, there is growing concern among the general public about radioactivity levels in the terrestrial and marine environment, as well as about the potential risk of future nuclear accidents. In this context, clear and transparent communication with the public is needed. EURDEP (European Radiological Data Exchange Platform) is both a standard format for radiological data and a network for the exchange of automatic monitoring data. The latest release of the format is version 2.0, which has been in use since the beginning of 2002.
Keywords: environmental radioactivity, Euratom, monitoring report, REMdb
186 Evaluating the Business Improvement District Redevelopment Model: An Ethnography of a Tokyo Shopping Mall
Authors: Stefan Fuchs
Abstract:
Against the backdrop of the proliferation of shopping malls in Japan during the last two decades, this paper presents the results of an ethnography conducted at a recently built suburban shopping mall in western Tokyo. Through an analysis of the lived experiences of local residents, mall customers and the mall management, the paper evaluates the benefits and disadvantages of the Business Improvement District (BID) model, which was implemented as the urban redevelopment strategy for the area surrounding the mall. The results show that while the BID model has in some respects contributed to the economic prosperity and perceived convenience of the area, it has led to gentrification, and the redevelopment shows deficiencies with regard to the inclusion of the elderly population and the democratization of the decision-making process within the area. In Japan, shopping malls have been growing steadily in both size and number since a series of deregulation policies was introduced in the year 2000 in an attempt to boost the domestic economy and rejuvenate urban landscapes. Shopping malls have thereby become defining spaces of the built environment and are arguably important places of social interaction. Notwithstanding the vital role they play as factors of urban transformation, they have been somewhat overlooked in research on Japan, especially with respect to their meaning for people's everyday lives. By examining the ways people make use of space in a shopping mall, the research project presented in this paper addresses this gap. Moreover, the research site is one of the few BIDs in Japan, and the results presented here give an indication of the future applicability of this urban redevelopment model. The data presented were collected during nine months of ethnographic fieldwork in and around the shopping mall.
The ethnography includes semi-structured interviews with ten key informants as well as direct and participant observations examining the lived experiences and perceptions of people living, shopping or working at the mall. The analysis of the collected data focused on recurring themes, aiming ultimately to capture different perspectives on the same aspects. In this manner, the research project documents the social agency of different groups within one communal network. The analysis of perceptions of the urban redevelopment around the mall shows that mainly mall customers and large businesses benefit from the BID redevelopment model. While local residents benefit to some extent from their neighbourhood becoming more convenient for shopping, they perceive themselves as disadvantaged by changing demographics due to rising living expenses, by the general noise level, and by the prioritisation of a certain customer segment or age group at the mall. Although the mall examined in this project is just one example, the findings suggest that future urban redevelopment policies will have to provide incentives for landowners and developers to think of other ways of transforming underdeveloped areas.
Keywords: business improvement district, ethnography, shopping mall, urban redevelopment