Search results for: disaster scenarios
151 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management
Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, further adding to this perennial global concern. Forest fires often lead to devastating consequences ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management.
This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fires. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.
Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities
Procedia PDF Downloads 72
150 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, the thorough investigation of how sub-filter scale dynamics and filter anisotropy affect SFS modeling accuracy remains lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES: Firstly, the accuracy of direct deconvolution models (DDM) is evaluated concerning sub-filter scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of FGR becomes evident as it plays a critical role in controlling errors for precise SFS stress prediction. When FGR is set to 1, the DDM models struggle to faithfully reconstruct SFS stress due to inadequate resolution of SFS dynamics. Notably, prediction accuracy improves when FGR is set to 2, leading to accurate reconstruction of SFS stress, except for cases involving Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Furthermore, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. By utilizing the dynamic Smagorinsky model (DSM), dynamic mixed model (DMM), and direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined in LES filters. The results emphasize the DDM’s proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. 
Notably high correlation coefficients exceeding 90% are observed in the a priori study for the DDM’s reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
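The direct inversion that underpins the DDM is only possible because filters such as the Gaussian have strictly nonzero transfer functions. A minimal 1-D sketch of this idea (our own illustration with invented parameters, not the authors' solver) filters a synthetic field with a Gaussian kernel in spectral space and then recovers it exactly by dividing by the transfer function:

```python
import numpy as np

# 1-D sketch of direct deconvolution with an invertible Gaussian filter.
# The field, filter width, and grid size are invented illustrative values.
rng = np.random.default_rng(0)
n = 256
k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
u = rng.standard_normal(n)                  # synthetic "turbulent" field
delta = 4.0 / n                             # assumed filter width
G = np.exp(-(k * delta) ** 2 / 24.0)        # Gaussian filter transfer function

u_hat = np.fft.fft(u)
u_filtered = np.fft.ifft(G * u_hat).real            # grid-filtered field
u_deconv = np.fft.ifft(np.fft.fft(u_filtered) / G).real  # direct inverse

# Because G never vanishes, the reconstruction error is at round-off level.
err = np.max(np.abs(u_deconv - u))
print(f"max reconstruction error: {err:.2e}")
```

For sharp spectral (non-invertible) filters the division blows up, which is why only smooth filters such as the Gaussian, Helmholtz, or Butterworth kernels listed above are candidates for direct deconvolution.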
Procedia PDF Downloads 76
149 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and onboard processing capabilities. Space mission errors and uncertainties are grouped into categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key elements of MPC design are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. Input and output constraints can both be expressed as linear inequalities of the same form. Moreover, recent convexification techniques for non-convex geometric constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are given in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined.
In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements that sensors and actuators place on the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
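To make the three design elements concrete, the sketch below (our own construction with illustrative numbers, not the authors' formulation or HIL setup) applies receding-horizon MPC to a 1-D double-integrator model of spacecraft translation, with a prediction model, a quadratic cost, and the constraints deliberately omitted:

```python
import numpy as np

# Minimal receding-horizon sketch: 1-D double integrator (position/velocity),
# batch prediction matrices, unconstrained quadratic cost. All values invented.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])    # prediction model x+ = A x + B u
B = np.array([[0.5 * dt**2], [dt]])
N = 10                                    # prediction horizon

# Batch prediction: X = F x0 + G U over the horizon
F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        G[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

Q = np.kron(np.eye(N), np.diag([1.0, 0.1]))   # state weights over the horizon
R = 0.01 * np.eye(N)                          # input weight

x = np.array([[10.0], [0.0]])                 # start 10 m from the target
for _ in range(30):
    # Unconstrained QP solution: U = -(G'QG + R)^{-1} G'Q F x
    U = np.linalg.solve(G.T @ Q @ G + R, -G.T @ Q @ F @ x)
    x = A @ x + B * U[0, 0]                   # apply only the first input

print(abs(x[0, 0]) < 1.0)  # regulated toward the target
```

A real formulation would add the linear inequality input/output constraints described above, turning each receding-horizon step into a constrained QP.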
Procedia PDF Downloads 110
148 An Artificially Intelligent Teaching-Agent to Enhance Learning Interactions in Virtual Settings
Authors: Abdulwakeel B. Raji
Abstract:
This paper introduces a concept of an intelligent virtual learning environment that involves communication between learners and an artificially intelligent teaching agent in an attempt to replicate classroom learning interactions. The benefit of this technology over current e-learning practices is that it creates a virtual classroom where real-time adaptive learning interactions are made possible. This is a move away from the static learning practices currently adopted by e-learning systems. Over the years, artificial intelligence has been applied to various fields, including but not limited to medicine, military applications, psychology, and marketing. The purpose of e-learning applications is to ensure users are able to learn outside of the classroom, but a major limitation has been the inability to fully replicate classroom interactions between teacher and students. This study used comparative surveys to gain an understanding of the current learning practices in Nigerian universities and how these practices compare to the use of a developed e-learning system. The study was conducted by attending several lectures and noting the interactions between lecturers and tutors, and as a result, software has been developed that deploys an artificially intelligent teaching agent alongside an e-learning system to enhance the user learning experience and create learning interactions similar to those found in classroom and lecture hall settings. Dialogflow has been used to implement the teaching agent, developed using JSON, which serves as a virtual teacher. Course content has been created using HTML, CSS, PHP and JavaScript as a web-based application. This technology can run on handheld devices and Google-based home technologies to give learners access to the teaching agent at any time.
This technology also implements definite clause grammars and natural language processing to match user inputs and requests with defined rules in order to replicate learning interactions. The developed technology covers familiar classroom scenarios such as answering users’ questions, asking ‘do you understand’ at regular intervals and handling subsequent requests, and taking advanced user queries to give feedback at later points. The software uses deep learning techniques to learn user interactions and patterns and thereby enhance the user learning experience. System testing was carried out with undergraduate students in the UK and Nigeria on the course ‘Introduction to Database Development’. Test results and feedback from users show that this study and the developed software are a significant improvement on existing e-learning systems. Further experiments are to be run using the software with different students and more course content.
Keywords: virtual learning, natural language processing, definite clause grammars, deep learning, artificial intelligence
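As an illustration of matching user inputs against defined rules, the toy sketch below (our own invented rules and responses, not the deployed Dialogflow agent) maps learner utterances to teaching-agent replies with regular expressions:

```python
import re

# Toy rule-based intent matcher in the spirit of the matching described
# above. The patterns and response templates are invented examples.
RULES = [
    (r"\bwhat is (a |an |the )?(?P<topic>[\w ]+)\??",
     "Let me explain {topic} with an example."),
    (r"\bi (do not|don't) understand\b",
     "No problem - let's go over that section again."),
    (r"\byes\b|\bi understand\b",
     "Great, let's move on to the next topic."),
]

def respond(user_input: str) -> str:
    """Return the response of the first rule whose pattern matches."""
    text = user_input.lower().strip()
    for pattern, template in RULES:
        m = re.search(pattern, text)
        if m:
            topic = m.groupdict().get("topic", "")
            return template.format(topic=topic) if "{topic}" in template else template
    return "Could you rephrase that?"

print(respond("What is a primary key?"))  # → Let me explain primary key with an example.
```

A production agent would replace the regular expressions with trained intent classification, but the rule table keeps the matching step inspectable.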
Procedia PDF Downloads 135
147 A Geospatial Approach to Coastal Vulnerability Using Satellite Imagery and Coastal Vulnerability Index: A Case Study Mauritius
Authors: Manta Nowbuth, Marie Anais Kimberley Therese
Abstract:
The vulnerability of coastal areas to storm surges stands as a critical global concern. The increasing frequency and intensity of extreme weather events have increased the risks faced by communities living along coastlines worldwide. Small island developing states (SIDS) stand out as exceptionally vulnerable: their coastal regions, where human habitation meets natural forces, are on the frontlines of climate-induced challenges, and the intensification of storm surges underscores the urgent need for a comprehensive understanding of coastal vulnerability. With limited landmass, low-lying terrain, and reliance on coastal resources, SIDS face an amplified vulnerability to the consequences of storm surges; the delicate balance between human activities and environmental dynamics in these island nations increases the urgency of tailored strategies for assessing and mitigating coastal vulnerability. This research evaluates the vulnerability of coastal communities in Mauritius. The satellite imagery analysis makes use of Sentinel imagery, the modified normalised difference water index, classification techniques, and the DSAS add-on to quantify the extent of shoreline erosion or accretion, providing a spatial perspective on coastal vulnerability. The Coastal Vulnerability Index (CVI) is applied using the formula of Gornitz et al.; this index considers factors such as coastal slope, sea level rise, mean significant wave height, and tidal range. Weighted assessments identify regions with varying levels of vulnerability, ranging from low to high. The study was carried out in a village in the south of Mauritius, Rivière des Galets, with a population of about 500 people over an area of 60,000 m².
The village of Rivière des Galets lies on the southern coast of Mauritius, which is exposed to the open Indian Ocean and is therefore vulnerable to swells. Swells generated by the southeast trade winds can lead to large waves and rough sea conditions along the southern coastline, with an impact on coastal activities including fishing, tourism, and coastal infrastructure. On the one hand, the results highlighted that, over a 123 km stretch of coastline, the linear regression rate for the 5-year span varies from -24.1 m/yr to 8.2 m/yr; the maximum rate of erosion is -24.1 m/yr and the maximum rate of accretion is 8.2 m/yr. On the other hand, the coastal vulnerability index varies from 9.1 to 45.6 and was categorised into low, moderate, high, and very high risk zones. It has been observed that regions which lack protective barriers and consist of sandy beaches are categorised as high-risk zones; it is therefore imperative to give high-risk regions immediate attention and intervention, as they are most likely to be exposed to coastal hazards and impacts from climate change, which demand proactive measures for enhanced resilience and sustainable adaptation strategies.
Keywords: climate change, coastal vulnerability, disaster management, remote sensing, satellite imagery, storm surge
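The CVI computation referred to above can be sketched with the common Gornitz-style formulation, in which each variable is ranked from 1 (very low risk) to 5 (very high risk) and the index is the square root of the product of the ranks divided by their number. The ranks below are invented illustrative values, not the study's data:

```python
import math

def cvi(ranks):
    """Gornitz-style CVI: sqrt(product of risk ranks / number of ranks)."""
    return math.sqrt(math.prod(ranks) / len(ranks))

# Assumed ranks for: geomorphology, coastal slope, sea-level rise,
# shoreline change, mean significant wave height, tidal range
ranks = [5, 4, 3, 5, 4, 2]
print(round(cvi(ranks), 1))  # → 20.0
```

Because the ranks multiply, a single very-high-risk variable (for example an unprotected sandy beach) pushes the index up sharply, which is consistent with the high-risk zoning observed for such segments.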
Procedia PDF Downloads 9
146 An Observation Approach of Reading Order for Single Column and Two Column Layout Template
Authors: In-Tsang Lin, Chiching Wei
Abstract:
Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. A survey of the literature finds that state-of-the-art algorithms cannot reliably recover the reading order in portable document format (PDF) files with rich formatting and diverse layout arrangements. In recent years, most studies on reading-order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to the problem of detecting reading-order relationships between logical components, such as cross-references. Over three years of development, the company Foxit has refined its layout recognition (LR) engine, demonstrated in revision 20601, to improve reading-order accuracy. The bounding box of each paragraph can be obtained correctly by the Foxit LR engine, but the resulting reading order is not always correct for single-column and two-column layouts due to issues with tables, formulas, multiple small separated bounding boxes, and footers. Thus, an algorithm is developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a novel observation method (here called the MESH method) is proposed to open new possibilities in reading-order research. Two important parameters are introduced: the number of bounding boxes on the right side of the present bounding box (NRight), and the number of bounding boxes under the present bounding box (Nunder). The normalized x-value (x divided by the page width), the normalized y-value (y divided by the page height), and the x- and y-positions of each bounding box are also taken into consideration.
Initial experimental results for the single-column layout format demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using the proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) compared with the same baseline, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple-small-bounding-box issue can be solved by the MESH method. However, three issues remain unsolved: tables, formulas, and randomly scattered small bounding boxes. The detection of table position and the recognition of table structure are out of scope for this paper and require separate research. Future work will address detecting the position of tables on the page and extracting their content.
Keywords: document processing, reading order, observation method, layout recognition
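The two MESH parameters can be sketched as follows, under our own interpretation of "right of" and "under" (the other box's left edge lies beyond the current box's right edge, or its top edge lies beyond the current box's bottom edge); the bounding boxes are invented:

```python
# Sketch of the MESH parameters NRight and Nunder for paragraph bounding
# boxes given as (x0, y0, x1, y1), with y increasing downward. The boxes
# below mimic a two-column page: two left-column paragraphs, one right column.
def mesh_params(boxes):
    params = []
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        n_right = sum(1 for j, b in enumerate(boxes) if j != i and b[0] >= x1)
        n_under = sum(1 for j, b in enumerate(boxes) if j != i and b[1] >= y1)
        params.append((n_right, n_under))
    return params

boxes = [(0, 0, 40, 20), (0, 25, 40, 45), (50, 0, 90, 45)]
print(mesh_params(boxes))  # → [(1, 1), (1, 0), (0, 0)]
```

Sorting the boxes by decreasing (NRight, Nunder) reproduces left-to-right, top-to-bottom reading order for this simple two-column example, which is the intuition behind using these counts as ordering features.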
Procedia PDF Downloads 181
145 Damage-Based Seismic Design and Evaluation of Reinforced Concrete Bridges
Authors: Ping-Hsiung Wang, Kuo-Chun Chang
Abstract:
There has been a common trend worldwide in the seismic design and evaluation of bridges towards the performance-based method, where the lateral displacement or the displacement ductility of the bridge column is regarded as an important indicator for performance assessment. However, the seismic response of a bridge to an earthquake is a combined result of cyclic displacements and accumulated energy dissipation, causing damage to the bridge, and hence the lateral displacement (ductility) alone is insufficient to tell its actual seismic performance. This study aims to propose a damage-based seismic design and evaluation method for reinforced concrete bridges on the basis of newly developed capacity-based inelastic displacement spectra. The capacity-based inelastic displacement spectra, which comprise an inelastic displacement ratio spectrum and a corresponding damage state spectrum, were constructed by using a series of nonlinear time history analyses and a versatile, smooth hysteresis model. The smooth model takes into account the effects of various design parameters of RC bridge columns and correlates the column’s strength deterioration with the Park and Ang damage index. It was proved that the damage index not only can be used to accurately predict the onset of strength deterioration, but also can be a good indicator for assessing the actual visible damage condition of the column regardless of its loading history (i.e., similar damage indices correspond to similar actual damage conditions for the same designed columns subjected to very different cyclic loading protocols as well as earthquake loading), providing better insight into the seismic performance of bridges.
Besides, the computed spectra show that the inelastic displacement ratio for far-field ground motions approximately conforms to the equal displacement rule when the structural period is larger than around 0.8 s, but that for near-fault ground motions it departs from the rule in all of the considered spectral regions. Furthermore, near-fault ground motions lead to a significantly greater inelastic displacement ratio and damage index than far-field ground motions, and most practical design scenarios cannot survive the considered near-fault ground motion when the strength reduction factor of the bridge is not less than 5.0. Finally, the spectrum formula is presented as a function of structural period, strength reduction factor, and various column design parameters for far-field and near-fault ground motions by means of regression analysis of the computed spectra. Based on the developed spectrum formula, a design example of a bridge is presented to illustrate the proposed damage-based seismic design and evaluation method, where the damage state of the bridge is used as the performance objective.
Keywords: damage index, far-field, near-fault, reinforced concrete bridge, seismic design and evaluation
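The Park and Ang damage index used above combines the peak displacement demand with the accumulated hysteretic energy. A sketch with invented column values (not the study's data) is:

```python
def park_ang_damage(delta_max, delta_ult, energy_hyst, f_yield, beta=0.05):
    """Park-Ang index: D = delta_max/delta_ult + beta * E_h / (F_y * delta_ult).

    delta_max   peak displacement demand [m]
    delta_ult   ultimate displacement capacity under monotonic loading [m]
    energy_hyst accumulated hysteretic energy [kN·m]
    f_yield     yield strength [kN]
    beta        energy-weighting parameter (assumed value)
    """
    return delta_max / delta_ult + beta * energy_hyst / (f_yield * delta_ult)

# Illustrative column: 0.12 m peak displacement, 0.20 m ultimate capacity,
# 60 kN yield strength, 48 kN·m of dissipated hysteretic energy.
D = park_ang_damage(delta_max=0.12, delta_ult=0.20, energy_hyst=48.0,
                    f_yield=60.0, beta=0.05)
print(round(D, 2))  # → 0.8
```

The first term alone (0.6 here) is the displacement-ductility measure that conventional performance assessment relies on; the energy term (0.2 here) is what lets the index distinguish loading histories with the same peak displacement but very different cumulative damage.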
Procedia PDF Downloads 125
144 A Bayesian Approach for Health Workforce Planning in Portugal
Authors: Diana F. Lopes, Jorge Simoes, José Martins, Eduardo Castro
Abstract:
Health professionals are the keystone of any health system, delivering health services to the population. Given the time and cost involved in training new health professionals, the planning process for the health workforce is particularly important, as it ensures a proper balance between the supply and demand of these professionals, and it plays a central role in the Health 2020 policy. In the past 40 years, the planning of the health workforce in Portugal has been conducted in a reactive way, lacking a prospective vision based on an integrated, comprehensive and valid analysis. This situation may compromise not only productivity and overall socio-economic development but also the quality of the healthcare services delivered to patients. This is even more critical given the expected shortage of the health workforce in the future. Furthermore, Portugal faces an aging of some professional classes (physicians and nurses). In 2015, 54% of physicians in Portugal were over 50 years old, and 30% were over 60 years old. This phenomenon, associated with increasing emigration of young health professionals and a change in citizens’ illness profiles and expectations, must be considered when planning resources in healthcare. The prospect of sudden retirement of large groups of professionals in a short time is also a major problem to address. Another challenge is health workforce imbalance: Portugal has one of the lowest nurse-to-physician ratios, 1.5, below the European Region and OECD averages (2.2 and 2.8, respectively).
Within the scope of the HEALTH 2040 project – which aims to estimate the ‘Future needs of human health resources in Portugal till 2040’ – the present study takes a comprehensive, dynamic approach to the problem by (i) estimating the needs for physicians and nurses in Portugal, by specialty and by quinquennium until 2040; (ii) identifying the training needs of physicians and nurses in the medium and long term, until 2040; and (iii) estimating the number of students that must be admitted into medicine and nursing training systems each year, considering the different categories of specialties. The development of such an approach is significantly more critical in a context of limited budget resources and changing healthcare needs. In this context, this study presents the drivers of the evolution of healthcare needs (such as demographic and technological evolution and the future expectations of the users of health systems) and proposes a Bayesian methodology, combining the best available data with expert opinion, to model this evolution. Preliminary results considering different plausible scenarios are presented. The proposed methodology will be integrated into a user-friendly decision support system so it can be used by policymakers, with the potential to measure the impact of health policies at both the regional and the national level.
Keywords: Bayesian estimation, health economics, health workforce planning, human health resources planning
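The step of combining the best available data with expert opinion can be sketched with a conjugate Beta-Binomial update, for example for the annual retirement rate of older physicians; the prior and the counts below are invented for illustration, not HEALTH 2040 estimates:

```python
# Toy Bayesian update: a Beta prior encodes expert belief about the annual
# retirement rate of physicians over 60; observed counts update it.
def posterior_rate(prior_a, prior_b, retired, total):
    """Posterior mean of a Beta(prior_a, prior_b) prior after observing
    `retired` retirements among `total` physicians (Beta-Binomial model)."""
    a = prior_a + retired
    b = prior_b + (total - retired)
    return a / (a + b)

# Experts believe roughly 10% retire per year, encoded as Beta(2, 18);
# observed data: 150 retirements among 1000 physicians over 60.
print(round(posterior_rate(2, 18, 150, 1000), 3))  # → 0.149
```

With only 20 pseudo-observations in the prior, the posterior is dominated by the data; a stronger prior (say Beta(20, 180)) would pull the estimate closer to the experts' 10%, which is exactly the data-versus-opinion trade-off the methodology has to manage.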
Procedia PDF Downloads 252
143 Flood Early Warning and Management System
Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare
Abstract:
The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an Early Warning System for Flood Prediction and an efficient Flood Management System for the river basins of India is a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP using advanced computational tools/methods, viz. High-Performance Computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, it is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are to be run. High-performance computing provides a good computational means to overcome this issue for the construction of national-level or basin-level flash flood warning systems with high-resolution local-level warning analysis and better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimal resolutions. In this study, a free and open-source, HPC-based 2-D hydrodynamic model, with the capability to simulate rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing CPU nodes from 45 to 135, which shows good scalability and performance enhancement.
The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and better lead time, suitable for flood forecasting in near-real time. To disseminate warnings to end users, a network-enabled solution is developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. The system effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending on the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.
Keywords: flood, modeling, HPC, FOSS
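The reported scaling can be checked with a quick computation of speedup and parallel efficiency relative to ideal linear scaling:

```python
# Speedup and parallel efficiency from the figures reported above:
# tripling the CPU nodes from 45 to 135 cut the run time from 8 h to 3 h.
def scaling(t_old, t_new, n_old, n_new):
    speedup = t_old / t_new                    # observed speedup
    efficiency = speedup / (n_new / n_old)     # fraction of ideal scaling
    return speedup, efficiency

s, e = scaling(t_old=8.0, t_new=3.0, n_old=45, n_new=135)
print(round(s, 2), round(e, 2))  # → 2.67 0.89
```

An efficiency of about 89% on a 3x node increase is consistent with the "good scalability" claim; efficiency below roughly 50% would instead signal that communication overhead dominates and adding nodes no longer buys lead time.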
Procedia PDF Downloads 89
142 Simulating an Interprofessional Hospital Day Shift: A Student Interprofessional (IP) Collaborative Learning Activity
Authors: Fiona Jensen, Barb Goodwin, Nancy Kleiman, Rhonda Usunier
Abstract:
Background: Clinical simulation is now a common component of many health profession curricula in preparation for clinical practice. In the Rady Faculty of Health Sciences (RFHS), college leads in simulation and interprofessional (IP) education planned an eight-hour simulated hospital day shift in which seventy students from six health professions across two campuses learned with each other in a safe, realistic environment. Learning about interprofessional collaboration, an expected competency for many health professions upon graduation, was a primary focus of the simulation event. Method: Faculty representatives from the Colleges of Nursing, Medicine, Pharmacy, and Rehabilitation Sciences (Physical Therapy, Occupational Therapy, Respiratory Therapy) worked together to plan the IP event in a simulation facility in the College of Nursing. Each college provided a faculty mentor to guide students from their own profession. Students were placed in interprofessional teams, each consisting of a nurse, physician, and pharmacist, with respiratory, occupational, and physical therapists shared across the teams depending on the needs of the patients. Eight patient scenarios were role-played by health profession students, who had been provided with their patient’s story shortly before the event. Each team was guided by a facilitator. Results and Outcomes: On the morning of the event, all students gathered in a large group to meet mentors and facilitators and receive a brief overview of the six competencies for effective collaboration and the session objectives. The students, assuming their own profession’s roles, were provided with their patient’s chart at the beginning of the shift, met with their team, and then completed profession-specific assessments. Shortly into the shift, IP team rounds began, facilitated by the team facilitator.
During the shift, each role-played patient experienced a spontaneous health incident, which required collaboration between the IP team members for assessment and management. The afternoon concluded with team rounds, a collaborative management plan, and a facilitated debrief. Conclusions: During the debrief sessions, students responded to set questions related to the session learning objectives and described many positive learning moments. We believe that we have a sustainable simulation-based IP collaborative learning opportunity, which can be embedded into curricula and has the capacity to grow to include more health profession faculties and students. Opportunities are being explored in the RFHS at the administrative level to offer this event more frequently in the academic year to reach more students. In addition, a formally structured event evaluation tool would provide important feedback to event organizers and the colleges about the significance of the simulation event to student learning.
Keywords: simulation, collaboration, teams, interprofessional
Procedia PDF Downloads 131
141 Competitive Effects of Differential Voting Rights and Promoter Control in Indian Start-Ups
Authors: Prateek Bhattacharya
Abstract:
The definition of 'control' in India is a rapidly evolving concept, owing to the varying rights attached to different securities. Shares with differential voting rights (DVRs) provide the holder with voting rights that differ from those of ordinary equity shareholders of the company. Such DVRs can carry either superior or inferior voting rights, where DVRs with superior voting rights effectively provide the holder with golden shares in the company. While DVRs are not a novel concept in India, having been recognized since 2000, they were placed on the back burner by the Securities and Exchange Board of India (SEBI) in 2010, when the issuance of DVRs with superior voting rights was restricted. In June 2019, the SEBI rekindled the ebbing fire of DVRs, keeping in mind the fast-paced nature of the global economy, the government's faith that India’s ‘new age technology companies’ (i.e., Start-Ups) will lead the charge in achieving its goal of India becoming a $5 trillion economy by 2024, and recognizing that the promoters of such Start-Ups seek to raise capital without losing control over their companies. DVRs with superior voting rights can guarantee promoters up to 74% of the total voting power in Start-Ups for a period of 5 years, meaning that the holder of such DVRs can exercise sole control and material influence over the company for that period. This manner of control has the potential of causing both pro-competitive and anti-competitive effects in the markets where these companies operate. 
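The arithmetic behind this kind of control can be illustrated with a minimal sketch. The share counts and the 10:1 superior-voting ratio below are hypothetical, chosen only to show how a minority equity stake can approach the 74% voting-power figure mentioned above; the abstract itself does not specify a cap table.

```python
def voting_power(promoter_shares, other_shares, votes_per_dvr):
    """Fraction of total voting power held by a promoter whose shares
    carry superior voting rights (votes_per_dvr votes per share);
    all other shares are ordinary, carrying one vote each."""
    promoter_votes = promoter_shares * votes_per_dvr
    total_votes = promoter_votes + other_shares
    return promoter_votes / total_votes

# Hypothetical cap table: promoter holds 22% of the equity as 10:1 DVRs
share = voting_power(promoter_shares=22, other_shares=78, votes_per_dvr=10)
print(f"{share:.1%}")  # roughly 73.8% of the voting power
```

With a 10:1 ratio, a promoter holding barely a fifth of the equity commands close to three quarters of the votes, which is the scale of control the abstract describes.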
On the one hand, DVRs will allow Start-Up promoters/founders to retain control of their companies and protect their business interests from outside elements such as private/public investors – in a scenario where such investors have multiple investments in firms engaged in associated lines of business (whether on a horizontal or vertical level) and would seek to influence these firms to enter into potentially anti-competitive arrangements with one another, DVRs will enable the promoters to thwart such scenarios. On the other hand, promoters/founders who themselves have multiple investments in Start-Ups in associated lines of business run the risk of influencing these associated Start-Ups to engage in potentially anti-competitive arrangements in the name of profit maximisation. This paper shall be divided into three parts: Part I shall deal with the concept of ‘control’, as deliberated upon and decided by the SEBI and the Competition Commission of India (CCI) under both company/securities law and competition law; Part II shall review this definition of ‘control’ through the lens of DVRs; and Part III shall discuss the aforementioned potential pro-competitive and anti-competitive effects of DVRs by examining the current Indian Start-Up scenario. The paper shall conclude by providing suggestions for the CCI to incorporate a clearer and more progressive concept of ‘control’. Keywords: competition law, competitive effects, control, differential voting rights, DVRs, investor shareholding, merger control, start-ups
Procedia PDF Downloads 123
140 Coastal Vulnerability Index and Its Projection for Odisha Coast, East Coast of India
Authors: Bishnupriya Sahoo, Prasad K. Bhaskaran
Abstract:
Tropical cyclone is one of the worst natural hazards, leaving a trail of destruction and causing enormous damage to life, property, and coastal infrastructure. In a global perspective, the Indian Ocean is considered one of the cyclone-prone basins in the world. Specifically, the frequency of cyclogenesis in the Bay of Bengal is higher compared to the Arabian Sea. Of the four maritime states on the East coast of India, Odisha is highly susceptible to tropical cyclone landfall. Historical records clearly show that the frequency of cyclones has declined in this basin. However, in recent decades, the intensity and size of tropical cyclones have increased. This is a matter of concern, as the risk and vulnerability of the Odisha coast, exposed to high wind speeds and gusts during cyclone landfall, have increased. In this context, there is a need to assess and evaluate the severity of coastal risk, the area of exposure under risk, and the associated vulnerability with a higher dimension in a multi-risk perspective. A changing climate can result in the emergence of new hazards and vulnerabilities over a region, with differential spatial and socio-economic impact. Hence there is a need for coastal vulnerability projections under a changing climate scenario. With this motivation, the present study attempts to estimate the destructiveness of tropical cyclones based on the Power Dissipation Index (PDI) for cyclones that made landfall along the Odisha coast, which exhibits an increasing trend based on historical data. The study also covers futuristic scenarios of integral coastal vulnerability based on the trends in PDI for the Odisha coast. This study considers 11 essential and important parameters: cyclone intensity, storm surge, onshore inundation, mean tidal range, continental shelf slope, topographic elevation onshore, rate of shoreline change, maximum wave height, relative sea level rise, rainfall distribution, and coastal geomorphology. 
The study signifies that, over a decadal scale, the coastal vulnerability index (CVI) depends largely on the incremental change in variables such as cyclone intensity, storm surge, and associated inundation. In addition, the study performs a critical analysis of the modulation of PDI on storm surge and inundation characteristics for the entire coastal belt of Odisha State. Interestingly, the study brings to light that a linear correlation exists between the storm tide and the PDI. The trend analysis of PDI and its projection for coastal Odisha have direct practical applications in effective coastal zone management and vulnerability assessment. Keywords: Bay of Bengal, coastal vulnerability index, power dissipation index, tropical cyclone
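For readers unfamiliar with the PDI used above: it follows Emanuel's standard definition, the time integral of the cubed maximum sustained wind speed over a storm's lifetime, typically accumulated from 6-hourly best-track records. A minimal sketch, where the wind values are hypothetical placeholders rather than Odisha best-track data:

```python
import numpy as np

def power_dissipation_index(v_max, dt_hours=6.0):
    """PDI = integral of V_max^3 over the storm's lifetime
    (trapezoidal rule; winds in m/s, sampling interval in hours)."""
    v3 = np.asarray(v_max, dtype=float) ** 3
    dt_s = dt_hours * 3600.0
    return float(np.sum((v3[1:] + v3[:-1]) / 2.0) * dt_s)

# Hypothetical 6-hourly maximum winds (m/s) for one landfalling cyclone
winds = [18, 25, 33, 42, 46, 38, 24]
print(f"PDI = {power_dissipation_index(winds):.3e} m^3/s^2")
```

Because the wind speed is cubed, a modest increase in peak intensity dominates the index, which is why PDI trends can rise even while cyclone frequency declines.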
Procedia PDF Downloads 237
139 Comprehensive, Up-to-Date Climate System Change Indicators, Trends and Interactions
Authors: Peter Carter
Abstract:
Comprehensive climate change indicators and trends inform the state of the climate system with respect to present and future climate change scenarios and the urgency of mitigation and adaptation. With data records now going back many decades, indicator trends can complement model projections. They are provided as datasets by several climate monitoring centers, reviewed by state-of-the-climate reports, and documented by the IPCC assessments. Up-to-date indicators are provided here. Rates of change are instructive, as are extremes. The indicators include greenhouse gas (GHG) emissions (natural and synthetic), cumulative CO2 emissions, atmospheric GHG concentrations (including CO2 equivalent), stratospheric ozone, surface ozone, radiative forcing, global average temperature increase, land temperature increase, zonal temperature increases, carbon sinks, soil moisture, sea surface temperature, ocean heat content, ocean acidification, ocean oxygen, glacier mass, Arctic temperature, Arctic sea ice (extent and volume), northern hemisphere snow cover, permafrost indices, Arctic GHG emissions, ice sheet mass, and sea level rise. Global warming is not the most reliable single metric for the climate state; radiative forcing, atmospheric CO2 equivalent, and ocean heat content are more reliable. Global warming does not capture future committed change, whereas atmospheric CO2 equivalent does. Cumulative carbon is used for estimating carbon budgets. The forcing of aerosols is briefly addressed. Indicator interactions are included. In particular, indicators can provide insight into several crucial global warming amplifying feedback loops, which are explained. All indicators are increasing (adversely), most as fast as ever and some faster. One particularly pressing indicator is rapidly increasing global atmospheric methane. In this respect, methane emissions and sources are covered in more detail. 
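As context for the CO2-equivalent metric favoured above: total radiative forcing is commonly expressed as a CO2-equivalent concentration by inverting the simplified logarithmic forcing expression RF = 5.35 ln(C/C0) of Myhre et al. (1998). The sketch below assumes that formulation; the forcing value used is illustrative, not a figure reported by this paper.

```python
import math

def co2_equivalent_ppm(total_forcing_wm2, c0_ppm=278.0, alpha=5.35):
    """Invert RF = alpha * ln(C / C0) to express a total radiative
    forcing (W/m^2) as a CO2-equivalent concentration in ppm,
    relative to a pre-industrial baseline c0_ppm."""
    return c0_ppm * math.exp(total_forcing_wm2 / alpha)

# Illustrative total forcing of 3.1 W/m^2
print(round(co2_equivalent_ppm(3.1)))  # ≈ 496 ppm CO2-eq
```

This is why CO2 equivalent carries future commitment in a way the temperature record alone does not: it aggregates all forcing agents into one concentration-like number.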
Regarding their application, indicators used in assessing safe planetary boundaries are included. Indicators are considered with respect to recently published papers on possible catastrophic climate change and climate system tipping thresholds. They are climate-change-policy relevant. In particular, relevant policies include the 2015 Paris Agreement on “holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels” and the 1992 UN Framework Convention on Climate Change, which aims at the “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.” Keywords: climate change, climate change indicators, climate change trends, climate system change interactions
Procedia PDF Downloads 105
138 Troubling Depictions of Gambian Womanhood in Dayo Forster’s Reading the Ceiling
Authors: A. Wolfe
Abstract:
Dayo Forster’s impressively crafted Reading the Ceiling (2007) enjoys a relatively high profile among Western readers. It is one of only a handful of Gambian novels to be published by an international publisher, Simon and Schuster of London, and was subsequently shortlisted for the Commonwealth Writers’ Best First Book Prize in 2008. It is currently available to US readers in print and as an e-book and has 167 ratings on Goodreads. This paper addresses the possible influence of the book on Western readers’ perception of The Gambia, or Africa in general, through its depiction of the conditions of Gambian women’s lives. Through a close reading of passages and analysis of imagery, intertextuality, and characterization in the book, the paper demonstrates that Forster portrays the culture of The Gambia as oppressively patriarchal and the prospects for young girls who stay in the country as extremely limited. Reading the Ceiling starts on Ayodele’s 18th birthday, the day she has planned to have sex for the first time. Most of the rest of the book is divided into three parts, each following the chain of events that occur after sex with a potential partner. Although Ayodele goes abroad for her education in each of the three scenarios, she ultimately capitulates to the patriarchal politics and demands of marriage and childrearing in The Gambia, settling for relationships with men she does not love, cooking and cleaning for husbands and children, and silencing her own opinions and desires in exchange for the familiar traditions of patriarchal—and, in one case, polygamous—marriage. Each scenario ends with resignation to death, as, after her mother’s funeral, Ayodele admits to herself that she will be next. Forster uses dust and mud imagery throughout the novel to indicate the dinginess of Ayodele’s life as a young woman, and then as wife and mother, in The Gambia, as well as the inescapability of this life. 
This depiction of earthen material is also present in the novel’s recounting of an oral tale about a mermaid captured by fishermen, a story that mirrors Ayodele’s ensnarement by traditional marriage customs and gender norms. A review of the fate of other characters in the novel reveals that Ayodele is not the only woman who becomes trapped by the expectations for women in The Gambia, as those who stay in the country end up subservient to their husbands and/or victims of men’s habitual infidelity. It is important to note that Reading the Ceiling is focused on the experiences of a minority—The Gambia’s middle class, Christian urban dwellers with money for education. Regardless of its limited scope, the novel clearly depicts The Gambia as a place where women are simply unable to successfully contend against traditional patriarchal norms. Although this novel evokes vivid imagery of The Gambia through original and compelling descriptions of food preparation, clothing, and scenery, it perhaps does little to challenge stereotypical perceptions of the lives of African women among a Western readership. Keywords: African literature, commonwealth literature, marriage, stereotypes, women
Procedia PDF Downloads 172
137 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems
Authors: A. G. Akhundov
Abstract:
Issues of obtaining unbiased data on the status of pipeline systems for oil and oil-product transportation become especially important when laying and operating pipelines under severe natural and climatic conditions. Essential attention is paid here to researching exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, and timely planning and implementation of compensating measures, are only possible if the operating conditions of pipeline systems are regularly monitored, and changes in permafrost soil and hydrological conditions are accounted for. One of the main reasons for emergency situations is the geodynamic factor. Experience shows that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. The analysis of natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aerial visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. A unified geographic information system (GIS) environment is a convenient way to implement the monitoring system on main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient simulation of main pipelines (both in 2D and 3D), the analysis of situations, and the selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences. 
The specifics of such systems include: multi-dimensional simulation of the facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. One of the most interesting uses of the monitoring results is the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography data, and data from in-line inspection and instrument measurements. The resulting 3D model forms the basis of the information system, providing the means to store and process data from geotechnical observations with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GISs for geotechnical monitoring of pipeline systems is aimed at improving the reliability of their operation, reducing the probability of negative events (accidents and disasters), and mitigating their consequences if they still occur. Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning
Procedia PDF Downloads 189
136 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator to allow the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error; it can be difficult to identify the cause and even more difficult to predict issues before they arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels, or to schedule a sufficient number of technicians. This would expedite clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, the current tests being performed, results being validated, and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic-light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late, and in error. 
Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them, and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages, and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would be used to control the flow of specimens through each step of the process. Like existing software agent technology, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results. Keywords: laboratory process, optimization, pathology, computer simulation, workflow
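The traffic-light coding described above amounts to a simple threshold classifier over per-stage flow metrics. The sketch below is hedged: the authors plan to implement the simulator logic in JavaScript, the thresholds and the on-time-fraction metric are hypothetical placeholders (the paper does not state how flow levels map to colours), and Python is used here only for brevity.

```python
def flow_status(on_time_fraction):
    """Map the fraction of on-schedule specimens at a workflow stage to
    a traffic-light status (thresholds are illustrative placeholders)."""
    if on_time_fraction >= 0.90:
        return "green"   # normal flow
    if on_time_fraction >= 0.70:
        return "orange"  # slow flow
    return "red"         # critical flow

# Example per-stage metrics for one simulated shift (hypothetical values)
stages = {"ordering": 0.97, "testing": 0.82, "validation": 0.55}
print({stage: flow_status(f) for stage, f in stages.items()})
# {'ordering': 'green', 'testing': 'orange', 'validation': 'red'}
```

Recomputing this mapping on every update tick is what lets the animated flow diagram highlight emerging bottlenecks as they develop rather than after the fact.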
Procedia PDF Downloads 286
135 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES computes the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and to the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration further extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in LES filters are evaluated. 
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence. Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
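The deconvolution idea underlying the DDM can be illustrated with the classical van Cittert iteration, u* ← u* + (ū − G∗u*), which approximately inverts an invertible filter G. The sketch below applies it to a 1-D periodic field with a spectral Gaussian filter; it is a simplified stand-in, under those assumptions, not the authors' DDM implementation.

```python
import numpy as np

def gaussian_filter(u, delta, dx):
    """Spectral Gaussian filter with transfer function exp(-k^2 delta^2 / 24)."""
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    return np.fft.ifft(np.fft.fft(u) * np.exp(-(k ** 2) * delta ** 2 / 24.0)).real

def van_cittert(u_bar, filt, n_iter):
    """Approximate deconvolution: u* <- u* + (u_bar - G * u*)."""
    u_star = u_bar.copy()
    for _ in range(n_iter):
        u_star = u_star + (u_bar - filt(u_star))
    return u_star

# 1-D multi-mode test field on a periodic domain
n, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(4.0 * x) + 0.2 * np.sin(8.0 * x)
dx, delta = L / n, 4.0 * (L / n)  # filter width = 4 grid spacings (FGR-like ratio)
u_bar = gaussian_filter(u, delta, dx)
u_rec = van_cittert(u_bar, lambda v: gaussian_filter(v, delta, dx), n_iter=5)
print(np.abs(u - u_bar).max(), np.abs(u - u_rec).max())  # reconstruction error shrinks
```

The iteration converges only when the filter is invertible on the resolved scales, which is why the choice of filter and the filter-to-grid ratio matter so much in the study above.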
Procedia PDF Downloads 75
134 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model
Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle
Abstract:
In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. This can be done by numerical models or experimental measurements, but the numerical approach can be useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness for predicting inhalation exposure in many situations. However, since the WMR is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air change per hour (ACH) conditions. The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables of the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until reaching the steady-state condition (emission period). Then generation stopped, and concentration measurements continued until reaching the background concentration (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 h⁻¹, and the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively. 
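The structure of the modification described above, a well-mixed room balance with a first-order particle deposition loss added to the ventilation loss, can be sketched analytically. The generation rate and deposition coefficient below are placeholder values, not the study's measured ones; only the chamber volume and lower ACH come from the text.

```python
import numpy as np

def wmr_concentration(t_h, gen_rate, volume, ach, k_dep, c0=0.0):
    """Well-mixed room with a first-order deposition loss:
        dC/dt = G/V - (ACH + k_dep) * C
    t_h in hours, G in particles/h, V in m^3, ACH and k_dep in 1/h.
    Returns the analytical solution from initial concentration c0."""
    k = ach + k_dep
    c_ss = gen_rate / (volume * k)  # steady-state concentration
    return c_ss + (c0 - c_ss) * np.exp(-k * np.asarray(t_h, dtype=float))

t = np.linspace(0.0, 2.0, 50)
# Emission period: nebulizer on, concentration rises toward steady state
c_emit = wmr_concentration(t, gen_rate=1e8, volume=0.512, ach=1.4, k_dep=0.3)
# Decay period: generation stops, concentration relaxes toward background
c_decay = wmr_concentration(t, gen_rate=0.0, volume=0.512, ach=1.4,
                            k_dep=0.3, c0=float(c_emit[-1]))
```

Setting k_dep = 0 recovers the classic gas-phase WMR model, which is exactly why the unmodified model underpredicts aerosol losses during the decay period.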
The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared for the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model. However, there is still a difference between the actual and predicted values. In the emission period, the modified WMR results closely follow the experimental data; however, the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the rate given by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, will affect the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated. Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model
Procedia PDF Downloads 103
133 Application of Industrial Ecology to the INSPIRA Zone: Territory Planification and New Activities
Authors: Mary Hanhoun, Jilla Bamarni, Anne-Sophie Bougard
Abstract:
INSPIR’ECO is an 18-month research and innovation project that aims to specify and develop a tool offering new services to industrial actors and territorial planners/managers based on industrial ecology principles. The project is carried out on the territory of Salaise-Sablons, and the services are designed to be deployable on other territories. The Salaise-Sablons area is located at the boundary of 5 departments on a major European economic axis with multimodal traffic (river, rail, and road). The perimeter of 330 ha includes 90 hectares occupied by 20 companies, with a total of 900 jobs, and represents a significant potential basin of development. The project involves five multi-disciplinary partners (Syndicat Mixte INSPIRA, ENGIE, IDEEL, IDEAs Laboratory, and TREDI). The INSPIR’ECO project is based on the principle that local stakeholders need services to pool and share their activities/equipment/purchases/materials. These services aim to: 1. initiate and promote exchanges between existing companies and 2. identify synergies between pre-existing industries and future companies that could be implemented in INSPIRA. These eco-industrial synergies can be related to: the recovery/exchange of industrial flows (industrial wastewater, waste, by-products, etc.); the pooling of business services (collective waste management, stormwater collection and reuse, transport, etc.); the sharing of equipment (boiler, steam production, wastewater treatment unit, etc.) or resources (splitting job costs, etc.); and the creation of new activities (interface activities necessary for by-product recovery, development of products or services from a newly identified resource, etc.). These services are based on an IT tool, used by the interested local stakeholders, that is intended to support their decision-making. 
Thus, this IT tool: includes an economic and environmental assessment of each implantation or pooling/sharing scenario for existing or future industries; is meant for industrial and territorial managers/planners; and is designed to be used for each new industrial project. The specification of the IT tool is developed through an agile process throughout the INSPIR’ECO project, fed with users’ expectations, gathered in workshop sessions where mock-up interfaces are displayed, and with data availability, based on a local and industrial data inventory. These inputs allow the tool to be specified not only with technical and methodological constraints (notably those arising from the economic and environmental assessments) but also with data availability and users’ expectations. A review of innovative resource management initiatives in port areas was carried out at the beginning of the project to feed the service design step. Keywords: development opportunities, INSPIR’ECO, INSPIRA, industrial ecology, planification, synergy identification
Procedia PDF Downloads 163
132 Challenges for Reconstruction: A Case Study from 2015 Gorkha, Nepal Earthquake
Authors: Hari K. Adhikari, Keshab Sharma, K. C. Apil
Abstract:
The Gorkha, Nepal earthquake of moment magnitude (Mw) 7.8 hit the central region of Nepal on April 25, 2015, with the epicenter about 77 km northwest of the Kathmandu Valley. This paper aims to explore the challenges of reconstruction in the rural earthquake-stricken areas of Nepal. The earthquake significantly affected the livelihood of people and the overall economy of Nepal, causing severe damage and destruction in central Nepal, including the nation’s capital. A large part of the earthquake-affected area is difficult to access, with rugged terrain and scattered settlements, which posed unique challenges to reconstruction and rehabilitation efforts on a massive scale. Some 800 thousand buildings were affected, leaving 8 million people homeless. The challenge of reconstructing as many as 800 thousand houses is arduous for Nepal against the background of its turbulent political scenario and weak governance. Despite the significant number of actors involved in the reconstruction process, little appreciable relief has reached the ground, which is reflected in the frustration of the affected people. The 2015 Gorkha earthquake is one of the most devastating disasters in the modern history of Nepal. To the best of our knowledge, there is no comprehensive study on post-disaster reconstruction in modern Nepal that integrates the information necessary to deal with the challenges and opportunities of reconstruction. The study was conducted using a qualitative content analysis method. Thirty engineers and ten social mobilizers working on reconstruction, and several hundred local social workers, local party leaders, and earthquake victims, were selected arbitrarily. Information was collected through semi-structured interviews with open-ended questions, focus group discussions, and field notes, with no previous assumptions. 
The authors also reviewed the literature, covering academic and practitioner studies on the challenges of post-earthquake reconstruction in developing countries, such as the 2001 Gujarat earthquake, the 2005 Kashmir earthquake, the 2003 Bam earthquake, and the 2010 Haiti earthquake, which involve very similar building typologies and economic, political, geographical, and geological conditions to Nepal. Secondary data were collected from reports, action plans, and reflection papers of governmental entities, non-governmental organizations, private sector businesses, and online news. This study concludes that inaccessibility, the absence of local government, weak governance, weak infrastructure, lack of preparedness, knowledge gaps, and manpower shortages are the key challenges of reconstruction after the 2015 earthquake in Nepal. After scrutinizing the different challenges and issues, the study counsels that good governance, integrated information, public participation, and short-term and long-term strategies to tackle technical issues are crucial factors for timely, quality reconstruction in the context of Nepal. The sample collected for this study is relatively small and may not be fully representative of the stakeholders involved in reconstruction. However, the key findings of this study are ones that need to be recognized by academics, governments, and implementation agencies, and considered in the implementation of post-disaster reconstruction programs in developing countries. Keywords: Gorkha earthquake, reconstruction, challenges, policy
Procedia PDF Downloads 409
131 A Method Intensive Top-down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India
Authors: Rituparna Pal, Faiz Ahmed
Abstract:
Neighbourhood energy efficiency is a newly emerged term addressing the quality of the urban stratum of the built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged policymakers in developing urban-scale cities to envision plans under the aegis of urban-scale sustainability. The importance of neighbourhood energy efficiency has been realized only lately, just as the cities, towns, and other areas comprising this massive global urban stratum have started facing strong blows from climate change, energy crises, cost hikes, and an alarming shortfall in the equity that urban areas require. This step toward urban sustainability can therefore more easily be described as a 'retrofit action', covering up an already affected urban structure. So even if we pursue energy efficiency for existing cities and urban areas, the initial layer remains, for which a complete model of urban sustainability still lacks definition. Urban sustainability is a broadly discussed term with countless parameters and policies through which the loop can be closed. Among these, neighbourhood energy efficiency can be an integral part, where neighbourhood-scale indicators, block-level indicators, and building physics parameters can be understood, analyzed, and synthesized to help derive guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy efficiency but also in important parameters like quality of life, access to green space, access to daylight, outdoor comfort, and natural ventilation. So apart from designing less energy-hungry buildings, it is necessary to create a built environment that places less stress on buildings to consume more energy. Much literary analysis has been done in Western countries, prominently in Spain and Paris, and also in Hong Kong, leaving a distinct gap in the Indian scenario in exploring sustainability at the urban stratum.
The site for the study has been selected in the upcoming capital city of Amaravati, and the approach can be replicated in areas with similar neighbourhood typologies. The paper proposes a methodical intent to quantify energy and sustainability indices in detail by involving several macro-, meso-, and micro-level covariates and parameters. Several iterations have been made at both macro and micro levels and have been subjected to simulation, computation, and mathematical models, and finally to comparative analysis. Parameters at all levels are analyzed to identify the best-case scenarios, which in turn are extrapolated to the macro level, finally yielding a proposed model for an energy-efficient neighbourhood and a worked-out set of guidelines with the derived significances and correlations.
Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters
Procedia PDF Downloads 176
130 Physical Aspects of Shape Memory and Reversibility in Shape Memory Alloys
Authors: Osman Adiguzel
Abstract:
Shape memory alloys belong to a class of smart materials by exhibiting a peculiar property called the shape memory effect. This property is characterized by the recoverability of two particular shapes of the material at different temperatures. These materials are often called smart materials due to their functionality and their capacity to respond to changes in the environment. Shape memory materials are used as shape memory devices in many interdisciplinary fields such as medicine, bioengineering, metallurgy, the building industry, and many engineering fields. The shape memory effect is performed thermally by heating and cooling after initial cooling and stressing treatments, and this behavior is called thermoelasticity. The effect is based on martensitic transformations characterized by changes in the crystal structure of the material, and it is the result of successive thermally and stress-induced martensitic transformations. Shape memory alloys exhibit thermoelasticity and superelasticity by means of deformation in the low-temperature product phase and the high-temperature parent phase region, respectively. Superelasticity is performed by stressing and releasing the material in the parent phase region. The loading and unloading paths differ in the stress-strain diagram, and the cycling loop reveals energy dissipation. The strain energy is stored after release, and these alloys are mainly used as deformation-absorbent materials in the control of civil structures subjected to seismic events, owing to their absorption of strain energy during a disaster or earthquake.
Thermally induced martensitic transformation occurs on cooling, along with lattice twinning through cooperative movements of atoms by means of lattice invariant shears; ordered parent phase structures turn into twinned martensite structures, and the twinned structures turn into detwinned structures by means of stress-induced martensitic transformation when the material is stressed in the martensitic condition. The thermally induced transformation occurs with cooperative movements of atoms in two opposite <110>-type directions on the {110}-type planes of the austenite matrix, which is the basal plane of martensite. Copper-based alloys exhibit this property in the metastable β-phase region, which has bcc-based structures in the high-temperature parent phase field. Lattice invariant shear and twinning are not uniform in copper-based ternary alloys and give rise to the formation of complex layered structures, depending on the stacking sequences on the close-packed planes of the ordered parent phase lattice. In the present contribution, x-ray diffraction and transmission electron microscopy (TEM) studies were carried out on two copper-based CuAlMn and CuZnAl alloys. X-ray diffraction profiles and electron diffraction patterns reveal that both alloys exhibit superlattice reflections inherited from the parent phase due to the displacive character of the martensitic transformation. X-ray diffractograms taken over a long time interval show that the diffraction angles and intensities of diffraction peaks change with aging duration at room temperature. In particular, some of the successive peak pairs providing a special relation between Miller indices come close to each other. This result points to a rearrangement of atoms in a diffusive manner.
Keywords: shape memory effect, martensitic transformation, reversibility, superelasticity, twinning, detwinning
Procedia PDF Downloads 181
129 Mitigating Urban Flooding through Spatial Planning Interventions: A Case of Bhopal City
Authors: Rama Umesh Pandey, Jyoti Yadav
Abstract:
Flooding is one of the waterborne disasters that causes extensive destruction in urban areas. Developing countries are at a higher risk of such damage, and more than half of global flooding events take place in Asian countries, including India. Urban flooding is more of a human-induced disaster than a natural one: it is highly influenced by anthropogenic factors, besides meteorological and hydrological causes. Unplanned urbanization and poor management of cities amplify the impact manifold and cause huge losses of life and property in urban areas. It is an irony that urban areas face water scarcity in summer and flooding during the monsoon. This paper is an attempt to highlight the factors responsible for flooding in a city, especially from an urban planning perspective, and to suggest mitigating measures through spatial planning interventions. The analysis has been done in two stages: first, assessing the impacts of previous flooding events, and second, analyzing the factors responsible for flooding at the macro and micro levels in cities. Bhopal, a city in Central India with a population of nearly two million, has been selected for the study. The city has been experiencing flooding during heavy monsoon rains. The factors responsible for urban flooding were identified through a literature review as well as case studies from different cities across the world and India, and were analyzed for both macro- and micro-level influences. At the macro level, the previous flooding events that caused huge destruction were analyzed and the most affected areas in Bhopal city were identified. Since the identified area falls within the catchment of a drain, the catchment area was delineated for the study. The factors analyzed were: the rainfall pattern, to calculate the return period using Weibull's formula; imperviousness, through mapping in ArcGIS; and runoff discharge, using the Rational method.
The catchment was divided into micro watersheds, and the micro watershed with the maximum impervious surface was selected to analyze the coverage and effect of physical infrastructure: storm water management, the sewerage system, and solid waste management practices. The area was further analyzed to assess the extent of violation of building byelaws and development control regulations and of encroachment over the natural water streams. Through this analysis, the study has revealed the main issues to be: lack of a sewerage system; inadequate storm water drains; inefficient solid waste management in the study area; violation of building byelaws by extending building structures either onto the drain or onto the road; and encroachment by slum dwellers along or onto the drain, reducing its width and capacity. Other factors include faulty culvert design resulting in a backwater effect, and roads at a higher level than the plinths of houses, which leads to submersion of their ground floors. The study recommends spatial planning interventions for mitigating urban flooding and strategies for managing excess rainwater during the monsoon season. Recommendations have also been made for efficient land use management to mitigate waterlogging in areas vulnerable to flooding.
Keywords: mitigating strategies, spatial planning interventions, urban flooding, violation of development control regulations
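The two macro-level calculations named in this abstract (Weibull's formula for the rainfall return period and the Rational method for peak runoff discharge) can be sketched as follows. The rainfall values, runoff coefficient, and catchment area below are invented for illustration and are not the study's data.

```python
# Illustrative sketch only: Weibull plotting-position return periods
# and the Rational method for peak discharge. All input values are
# hypothetical, not taken from the Bhopal study.

def weibull_return_periods(annual_max_rainfall_mm):
    """Return (rainfall, return period) pairs using T = (n + 1) / m,
    where m is the rank of the event sorted in descending order."""
    ranked = sorted(annual_max_rainfall_mm, reverse=True)
    n = len(ranked)
    return [(value, (n + 1) / rank) for rank, value in enumerate(ranked, start=1)]

def rational_peak_discharge(c, intensity_mm_per_h, area_km2):
    """Rational method in SI units: Q [m^3/s] = 0.278 * C * i * A."""
    return 0.278 * c * intensity_mm_per_h * area_km2

periods = weibull_return_periods([210, 95, 140, 180, 120])
print(periods[0])  # largest annual maximum: (210, 6.0) -> 6-year return period
print(round(rational_peak_discharge(0.85, 60.0, 4.2), 1))  # 59.5 m^3/s
```

The 0.278 factor converts rainfall intensity in mm/h over an area in km² into a discharge in m³/s; with area in hectares and intensity in mm/h the constant changes accordingly.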
Procedia PDF Downloads 329
128 Training for Search and Rescue Teams: Online Training for SAR Teams to Locate Lost Persons with Dementia Using Drones
Authors: Dalia Hanna, Alexander Ferworn
Abstract:
This research provides detailed proposed training modules for public safety teams and, specifically, the SAR teams responsible for search and rescue operations aimed at finding lost persons with dementia. Finding a lost person alive is the goal of this training, and time matters if a lost person is to be found alive. Finding lost people living with dementia is quite challenging, as they are unaware they are lost and will not seek help. Even a small contribution to SAR operations could help save a life. SAR operations will always require expert professionals and human volunteers. However, we can reduce their search time, save lives, and reduce costs by providing practical training based on real-life scenarios. The content of the proposed training is based on the research work done by the researcher in this area. This research has demonstrated that an algorithmic approach utilizing drones can support a successful search outcome. Understanding the behavior of the lost person, learning where they may be found, predicting their survivability, and automating the search are all contributions of this work, founded in theory and demonstrated in practice. In crisis management, human behavior constitutes a vital aspect of responding to the crisis; the speed and efficiency of the response are often affected by the difficulty of the operational context. Therefore, training in this area plays a significant role in preparing the crisis manager to manage the emotional aspects that shape decision-making in these critical situations. Since it is crucial to be able to make high-level strategic choices and to apply crisis management procedures, simulation exercises become central in training crisis managers to gain the skills needed to respond critically to these events.
The training will enhance the responders' ability to make decisions and anticipate the possible consequences of their actions through flexible and innovative reasoning, enabling them to respond to the crisis efficiently and quickly. As adult learners, search and rescue teams will approach training and learning by taking responsibility for the learning process, appreciating flexible learning, and contributing to the teaching and learning happening during the training. These are all characteristics of adult learning theories: the learner self-reflects, gathers information, collaborates with others, and is self-directed. One of the learning strategies associated with adult learning is effective elaboration. It helps learners remember information in the long term and use it in situations where it might be appropriate. It is also a strategy that can be taught easily and used with learners of different ages. Designers must design reflective activities to improve learners' intrapersonal awareness.
Keywords: training, OER, dementia, drones, search and rescue, adult learning, UDL, instructional design
Procedia PDF Downloads 108
127 Pueblos Mágicos in Mexico: The Loss of Intangible Cultural Heritage and Cultural Tourism
Authors: Claudia Rodriguez-Espinosa, Erika Elizabeth Pérez Múzquiz
Abstract:
Since the creation of the “Pueblos Mágicos” program in 2001, a series of social and cultural events has directly affected heritage conservation in the 121 registered localities, up until 2018, when the federal government terminated the program. Many studies have sought to analyze, from different perspectives and disciplines, the consequences that these appointments have generated in the “Pueblos Mágicos.” Multidisciplinary groups, such as the one headed by Carmen Valverde and Liliana López Levi, have brought together specialists from all over the Mexican Republic to create a set of diagnoses of most of these settlements, and although each has unique specificities, there is a constant in most of them that has to do with the loss of cultural heritage and is related to transculturality. Several factors have been identified that foster cultural loss, as a direct reflection of the economic crisis that prevails in Mexico. It is important to remember that the origin of this program had as its main objective promoting the growth and development of local economies, since one of the conditions for entering the program is having fewer than 20,000 inhabitants. With this goal in mind, one of the first actions that many “Pueblos Mágicos” carried out was to improve or create infrastructure to receive both national and foreign tourists, since this was practically non-existent. Creating hotels, restaurants, and cafes, and training certified tour guides, among other actions, have led to one of the great problems they face: globalization. Although not bad in itself, its impact has in many cases been negative for heritage conservation. Entry into and contact with new cultures has led to the undervaluation of cultural traditions, their transformation, and even their total loss.
This work seeks to present specific cases of transformation and loss of cultural heritage, as well as to reflect on the problem and propose scenarios in which the negative effects can be reversed. For this text, 36 “Pueblos Mágicos” have been selected for study, based on the settlements cited in volumes I and IV (the first and last of the collection) of the series produced by the multidisciplinary group led by Carmen Valverde and Liliana López Levi (researchers from UNAM and UAM Xochimilco, respectively) in the CONACyT-supported project entitled “Pueblos Mágicos. An interdisciplinary vision”, of which we are part. This sample is considered representative, since it comprises 30% of the 121 “Pueblos Mágicos” existing at that moment. With this information, the elements of intangible heritage loss or transformation have been identified in every chapter, based on the texts written by the participants of that project. Finally, this text presents an analysis of the effects that this federal program, as a public policy applied to these populations, has had on the conservation or transformation of the intangible cultural heritage of the “Pueblos Mágicos.” Transculturality, globalization, the creation of identities, and the desire to increase the flow of tourists have driven the changes that traditions (the main intangible cultural heritage) underwent in the 18 years that the federal program lasted.
Keywords: public policies, cultural tourism, heritage preservation, pueblos mágicos program
Procedia PDF Downloads 190
126 From Isolation to Integration: A Biophilic Design Approach for Enhancing Inhabitants’ Well-being in Urban Residential Spaces in Dhaka
Authors: Maliha Afroz Nitu, Shahreen Mukashafat Semontee
Abstract:
The concept of biophilic design has emerged as a transformative approach to restoring the intrinsic connection between people and nature, an innate bond disrupted by urbanization and industrialization. As urbanization progresses, it is crucial to raise awareness of this disconnection in order to ensure people can live and work in healthy environments that enhance well-being. Dhaka, the capital of Bangladesh, faces challenges arising from unplanned urban expansion, leading to a notable disconnect between city dwellers and their natural surroundings, a problem prevalent in rapidly developing megacities. Significant interdisciplinary research consistently shows that connecting indoor and outdoor spaces can improve mental and physical well-being by rekindling a connection with the natural world. However, despite these well-documented advantages, there is a significant lack of studies on the implementation of biophilic design principles in the built environment to tackle these problems. The Palashi Government Staff Quarter, a 3.8-acre housing area for government staff with around 1,000 residents in Dhaka, has been selected as a case study. The main goal is to create and implement biophilic design solutions that address social, environmental, and health issues while also enhancing the built environment. A methodology for improving biophilic design is developed according to the needs of the residents. This research uses a comprehensive approach, including site inspections and structured and semi-structured interviews with residents to gather qualitative data on their experiences and needs. A total of ten identical six-story buildings have been surveyed, with varying resident responses providing insight into their different perspectives.
Based on these findings, the study proposes alternative design strategies that integrate biophilic elements such as daylight, air, plants, and water into buildings through windows, skylights, clerestories, green walls, vegetation, and constructed water bodies. The objective of these strategies is to improve the built environment and repair the existing disconnection between humans and nature. Comparative analyses of the current and proposed scenarios demonstrate substantial upgrades to the built environment, as well as major improvements in the physical and psychological well-being of residents. Although this research focuses on one particular government housing area, the findings can be applied to other residential areas in Dhaka and similar urban environments. The study highlights the importance of biophilic design in housing and provides recommendations for policymakers and architects to improve living conditions by integrating nature into urban settings.
Keywords: biophilic design, residential, built environment, human nature connection, urban, Dhaka
Procedia PDF Downloads 33
125 Reaching Students Who “Don’t Like Writing” through Scenario Based Learning
Authors: Shahira Mahmoud Yacout
Abstract:
Writing is an essential skill in many vocational and academic environments, and notably in workplaces, yet many students perceive writing as tiring and boring, or even a “waste of time”. Studies in the field of foreign languages suggest this may be due to the lack of connection between what is learned in the university and what students come to encounter in real-life situations. Arabic learners felt they needed more language exposure in the context of their future professions. With this idea in mind, scenario-based learning (SBL) is reported to be an educational approach that motivates, engages, and stimulates students’ interest and achieves the desired writing learning outcomes. In addition, researchers have suggested SBL as an instructional approach that develops and enhances students’ skills by fostering higher-order thinking and active learning. It is a subset of problem-based learning and case-based learning. The approach focuses on authentic rhetorical framing, reflecting writing tasks in real-life situations. It works successfully when used to simulate real-world practices, providing context that reflects the types of situations professionals respond to in writing. It has been claimed that using realistic scenarios customized to the course’s learning objectives bridges the gap for students between theory and application. Within this context, scenario-based learning is thought to be an important approach for enhancing learners’ writing skills and reflecting meaningful learning within authentic contexts. As an Arabic foreign language instructor, the author noticed that students find it difficult to adapt writing styles to authentic writing contexts and to address different audiences and purposes. This is supported by studies which claimed that AFL students face difficulties with transferring writing skills to situations outside of the classroom context.
In addition, it was observed that some of the textbooks for teaching Arabic as a foreign language lacked topics that initiate higher-order thinking skills and stimulate the learners to understand the setting and create messages appropriate to different audiences, contexts, and purposes. The goals of this study are to 1) provide a rationale for using the scenario-based learning approach to improve AFL learners’ writing skills, 2) demonstrate how to design and implement a scenario-based learning technique aligned with the writing course objectives, 3) present samples of the scenario-based approach implemented in an AFL writing class, and 4) emphasize the role of peer review, along with the instructor’s feedback, in the process of developing the writing skill. Finally, this presentation highlights the importance of using the scenario-based learning approach in writing as a means to mirror students’ real-life situations and engage them in planning, monitoring, and problem solving. This approach helped make writing an enjoyable experience and one clearly useful to students’ future professional careers.
Keywords: meaningful learning, real life contexts, scenario based learning, writing skill
Procedia PDF Downloads 98
124 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the industrial chain’s delivery link and accounts for a large proportion of total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points have become trends in the development of modern logistics. Because customer demand is discrete and heterogeneous in both kind and spatial distribution, which leads to higher delivery failure rates and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. Their introduction has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery became a new hotspot, which also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution networks as integrated logistics and distribution networks that incorporate pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, together with the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD).
To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that minimizes the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially on large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impacts on the results, and we suggest helpful managerial insights for courier companies.
Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
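As a rough illustration of the adaptive large-neighbourhood search idea this abstract relies on, the sketch below runs a destroy-and-repair loop with two removal operators whose selection weights adapt when they improve the incumbent. The single-vehicle routing objective, operators, and scoring scheme are simplifications invented for illustration, not the authors' actual LRP-PFHD algorithm.

```python
# Minimal ALNS sketch on a toy single-route problem (depot = node 0).
# Destroy operators are picked with adaptive weights; greedy insertion repairs.
import math
import random

def route_cost(route, coords):
    """Total Euclidean length of depot -> route -> depot."""
    tour = [0] + route + [0]
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(tour, tour[1:]))

def random_removal(route, coords, rng, k=2):
    removed = rng.sample(route, min(k, len(route)))
    return [c for c in route if c not in removed], removed

def worst_removal(route, coords, rng, k=2):
    """Remove the customers whose removal saves the most distance."""
    def saving(c):
        return route_cost(route, coords) - route_cost([x for x in route if x != c], coords)
    removed = sorted(route, key=saving, reverse=True)[:k]
    return [c for c in route if c not in removed], removed

def greedy_insertion(route, removed, coords):
    for c in removed:
        best_i = min(range(len(route) + 1),
                     key=lambda i: route_cost(route[:i] + [c] + route[i:], coords))
        route = route[:best_i] + [c] + route[best_i:]
    return route

def alns(coords, iters=200, seed=0):
    rng = random.Random(seed)
    destroyers = [random_removal, worst_removal]
    weights = [1.0, 1.0]                      # adaptive operator weights
    best = list(range(1, len(coords)))
    best_cost = route_cost(best, coords)
    for _ in range(iters):
        op = rng.choices(range(len(destroyers)), weights=weights)[0]
        partial, removed = destroyers[op](best[:], coords, rng)
        candidate = greedy_insertion(partial, removed, coords)
        cost = route_cost(candidate, coords)
        if cost < best_cost - 1e-9:           # reward operators that improve
            best, best_cost = candidate, cost
            weights[op] += 0.5
    return best, best_cost
```

A full LRP-PFHD solver would add facility-opening decisions, multiple vehicle types for ordinary and refrigerated goods, capacity constraints on lockers, and an acceptance criterion such as simulated annealing; this skeleton only shows the adaptive destroy-and-repair loop.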
Procedia PDF Downloads 78
123 Reducing System Delay to Definitive Care For STEMI Patients, a Simulation of Two Different Strategies in the Brugge Area, Belgium
Authors: E. Steen, B. Dewulf, N. Müller, C. Vandycke, Y. Vandekerckhove
Abstract:
Introduction: The care of a ST-elevation myocardial infarction (STEMI) patient is time-critical. Reperfusion therapy within 90 minutes of initial medical contact is mandatory for improving the outcome. Primary percutaneous coronary intervention (PCI) without previous fibrinolytic treatment is the preferred reperfusion strategy in patients with STEMI, provided it can be performed within guideline-mandated times. Aim of the study: During a one-year period (January 2013 to December 2013), the files of all consecutive STEMI patients urgently referred from non-PCI facilities for primary PCI were reviewed. Special attention was given to the subgroup of patients with prior out-of-hospital medical contact generated by the 112 system. In an effort to reduce out-of-hospital system delay to definitive care, a change in pre-hospital 112 dispatch strategies is proposed for these time-critical patients. Actual time recordings were compared with travel time simulations for two suggested scenarios. The first scenario (SC1) involves the decision by the on-scene ground EMS (GEMS) team to transport the out-of-hospital diagnosed STEMI patient directly to a PCI centre, bypassing the nearest non-PCI hospital. The second strategy (SC2) explored the potential role of helicopter EMS (HEMS), where the on-scene GEMS team requests a PCI-centre-based HEMS team for immediate medical transfer to the PCI centre. Methods and Results: 49 STEMI patients (29.1% of all) were referred to our hospital for emergency PCI by a non-PCI facility. One file was excluded because of insufficient data collection. Within this analysed group of 48 secondary referrals, 21 patients had an out-of-hospital medical contact generated by the 112 system. The other 27 patients presented at the referring emergency department without prior contact with the 112 system.
The table below shows the actual time data from first medical contact to definitive care, as well as the simulated possible gain of time for both suggested strategies. The PCI team was always alerted upon departure from the referring centre, excluding further in-hospital delay. Time simulation tools were similar to those used by the 112 dispatch centre. Conclusion: Our data analysis confirms prolonged reperfusion times in cases of secondary emergency referral of STEMI patients, even with the use of HEMS. In our setting, there was no statistically significant difference in gain of time between the two suggested strategies; both reduce the delay generated by secondary referral by about one hour, thereby offering all patients PCI within the guideline-mandated time. However, immediate HEMS activation by the on-scene ground EMS team for transport purposes is preferred, as it ensures faster availability of the local GEMS team for its community. In case these options are not available and the guideline-mandated times for primary PCI are expected to be exceeded, primary fibrinolysis should be considered in the non-PCI centre.
Keywords: STEMI, system delay, HEMS, emergency medicine
Procedia PDF Downloads 319
122 Screening of Osteoporosis in Aging Populations
Authors: Massimiliano Panella, Sara Bortoluzzi, Sophia Russotto, Daniele Nicolini, Carmela Rinaldi
Abstract:
Osteoporosis affects more than 200 million people worldwide. About 75% of osteoporosis cases are undiagnosed or diagnosed only when a bone fracture occurs. Since osteoporosis-related fractures are significant determinants of the burden of disease and of the health and social costs of aging populations, we believe that the early identification and treatment of high-risk patients should be a priority in current healthcare systems. Screening for osteoporosis by dual-energy x-ray absorptiometry (DEXA) is not cost-effective for the general population. An alternative is pulse-echo ultrasound (PEUS), because of its lower cost. To this end, we developed an early detection program for osteoporosis with PEUS and evaluated its possible impact and sustainability. We conducted a cross-sectional study including 1,050 people in Italy. Subjects with >1 major or >2 minor risk factors for osteoporosis were invited to a PEUS bone mass density (BMD) measurement at the proximal tibia. Based on BMD values, subjects were classified as healthy (BMD > 0.783 g/cm²) or pathological, the latter including subjects with suspected osteopenia (0.783 ≥ BMD > 0.719 g/cm²) or osteoporosis (BMD ≤ 0.719 g/cm²). The responder rate was 60.4% (634/1050). According to their risk, a PEUS scan was recommended to 436 people, of whom 300 (mean age 45.2, 81% women) accepted to participate. We identified 240 (80%) healthy and 60 (20%) pathological subjects (47 osteopenic and 13 osteoporotic). We observed a significant association between high-risk people and reduced bone density (p = 0.043), with increased risks for female gender, older age, and menopause (p < 0.01). The yearly cost of the screening program was 8,242 euros. With current Italian fracture incidence rates in osteoporotic patients, we can reasonably expect that at least 6 fractures will occur in our sample over 20 years. Considering that the mean cost per fracture in Italy is today 16,785 euros, we can estimate a theoretical cost of 100,710 euros.
According to the literature, we can assume that the early treatment of osteoporosis could avoid 24,170 euros of these costs. If we add the actual yearly cost of the treatments to the cost of our program and compare this final amount of 11,682 euros to the avoidable costs of fractures (24,170 euros), we can measure a possible positive benefit/cost ratio of 2.07. As a major outcome, our study allowed us to identify early 60 people with significant bone loss who were not aware of their condition. This diagnostic anticipation constitutes an important element of value for the project, both for the patients, given the preventable negative outcomes caused by fractures, and for society in general, because of the related avoidable costs. Therefore, based on our findings, we believe that the PEUS-based screening performed could be a cost-effective approach to the early identification of osteoporosis. However, our study has some major limitations. In fact, the economic analysis is based on theoretical scenarios, so specific studies are needed for a better estimation of the possible benefits and costs of our program.
Keywords: osteoporosis, prevention, public health, screening
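The cost arithmetic in this abstract can be checked directly. Note one assumption: the yearly treatment cost (3,440 euros) is not stated in the abstract and is inferred here as the difference between the combined figure of 11,682 euros and the screening cost of 8,242 euros.

```python
# Reproduces the abstract's cost arithmetic. The treatment cost per year
# is an inferred value (11,682 - 8,242), not a figure stated in the text.

screening_cost_per_year = 8_242          # euros, reported
mean_cost_per_fracture = 16_785          # euros, Italy, reported
expected_fractures_20y = 6               # expected in the sample over 20 years

theoretical_fracture_cost = expected_fractures_20y * mean_cost_per_fracture
avoidable_cost = 24_170                  # euros avoidable per cited literature
treatment_cost_per_year = 11_682 - screening_cost_per_year   # inferred: 3,440
total_program_cost = screening_cost_per_year + treatment_cost_per_year
benefit_cost_ratio = avoidable_cost / total_program_cost

print(theoretical_fracture_cost)         # 100710
print(round(benefit_cost_ratio, 2))      # 2.07
```

The numbers are internally consistent: 6 fractures at 16,785 euros each give the quoted 100,710 euros, and 24,170 / 11,682 rounds to the quoted ratio of 2.07.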
Procedia PDF Downloads 122