Search results for: Abdul Rahman Omar
45 Business Intelligent to a Decision Support Tool for Green Entrepreneurship: Meso and Macro Regions
Authors: Anishur Rahman, Maria Areias, Diogo Simões, Ana Figeuiredo, Filipa Figueiredo, João Nunes
Abstract:
The circular economy (CE) has gained increased awareness among academics, businesses, and decision-makers as it stimulates resource circularity in production and consumption systems. A large body of epistemological work has explored the principles of CE, but scant attention has been paid to analysing how CE is evaluated, consented to, and enforced using economic metabolism data and a business intelligence framework. Economic metabolism involves the ongoing exchange of materials and energy within and across socio-economic systems and requires the assessment of vast amounts of data to provide quantitative analysis related to effective resource management. To address this gap, the present work focuses on regional flows in a pilot region of Portugal. This study aims to promote eco-innovation and sustainability in the regions of the Intermunicipal Communities Região de Coimbra, Viseu Dão Lafões and Beiras e Serra da Estrela, using these data to find precise synergies in terms of material flows and to give companies a competitive advantage in the form of valuable waste destinations, access to new resources and new markets, cost reduction, and risk-sharing benefits. In our work, emphasis is placed on applying artificial intelligence (AI) and, more specifically, on implementing state-of-the-art deep learning algorithms, contributing to the construction of a business intelligence approach. With the emergence of new approaches generally highlighted under the sub-heading of AI and machine learning (ML), the methods for statistical analysis of complex and uncertain production systems are facing significant changes. Therefore, various definitions of AI and its differences from traditional statistics are presented; furthermore, ML is introduced to identify its place in data science, and differences in topics such as big data analytics and in production problems that use AI and ML are identified. A lifecycle-based approach is then taken to analyse the use of different methods in each phase to identify the most useful technologies and unifying attributes of AI in manufacturing. Most macroeconomic metabolism models are mainly directed at the contexts of large metropolises, neglecting rural territories; within this project, therefore, a dynamic decision support model coupled with artificial intelligence tools and information platforms will be developed, focused on the reality of these transition zones between the rural and the urban. Thus, a real decision support tool is under development, which will surpass the scientific developments carried out to date and will allow limitations related to the availability and reliability of data to be overcome.
Keywords: circular economy, artificial intelligence, economic metabolisms, machine learning
Procedia PDF Downloads 72
44 Investigating the Association between Escherichia Coli Infection and Breast Cancer Incidence: A Retrospective Analysis and Literature Review
Authors: Nadia Obaed, Lexi Frankel, Amalia Ardeljan, Denis Nigel, Anniki Witter, Omar Rashid
Abstract:
Breast cancer is the most common cancer among women, with a lifetime risk of one in eight for all women in the United States. Although breast cancer is prevalent throughout the world, the uneven distribution in incidence and mortality rates is shaped by variation in population structure, environment, genetics, and known lifestyle risk factors. Furthermore, the bacterial profile in healthy and cancerous breast tissue differs, with a higher relative abundance of bacteria capable of causing DNA damage in breast cancer patients. Previous bacterial infections may change the composition of the microbiome and partially account for the environmental factors promoting breast cancer. One study found that higher amounts of Staphylococcus, Bacillus, and Enterobacteriaceae, of which Escherichia coli (E. coli) is a part, were present in breast tumor tissue. Based on E. coli’s ability to damage DNA, it is hypothesized that there is an increased risk of breast cancer associated with previous E. coli infection. Therefore, the purpose of this study was to evaluate the correlation between E. coli infection and the incidence of breast cancer. Holy Cross Health, Fort Lauderdale, provided access to the Health Insurance Portability and Accountability Act (HIPAA) compliant national database for the purpose of academic research. International Classification of Diseases 9th and 10th revision codes (ICD-9, ICD-10) were then used to conduct a retrospective analysis using data from January 2010 to December 2019. All breast cancer diagnoses and all patients infected versus not infected with E. coli who underwent typical E. coli treatment were investigated. The obtained data were matched for age, Charlson Comorbidity Index (CCI) score, and antibiotic treatment. Standard statistical methods were applied to determine statistical significance, and an odds ratio was used to estimate the relative risk. A total of 81,286 patients were identified and analyzed from the initial query, which was then reduced to 31,894 antibiotic-specific treated patients in each of the infected and control groups. The incidence of breast cancer was 2.51% (2,043 patients) in the E. coli group compared to 5.996% (4,874 patients) in the control group. The incidence of breast cancer was 3.84% (1,223 patients) in the treated E. coli group compared to 6.38% (2,034 patients) in the treated control group. The decreased incidence of breast cancer in the E. coli and treated E. coli groups was statistically significant, with p-values of 2.2×10⁻¹⁶ and 2.264×10⁻¹⁶, respectively. The odds ratios in the E. coli and treated E. coli groups were 0.784 (95% CI 0.756-0.813) and 0.787 (95% CI 0.743-0.833), respectively. The current study shows a statistically significant decrease in breast cancer incidence in association with previous Escherichia coli infection. Researching the relationship with single bacterial species is important, as only up to 10% of breast cancer risk is attributable to genetics, while the contribution of environmental factors, including previous infections, potentially accounts for a majority of the preventable risk. Further evaluation is recommended to assess the potential and mechanism of E. coli in decreasing the risk of breast cancer.
Keywords: breast cancer, escherichia coli, incidence, infection, microbiome, risk
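The abstract reports odds ratios with 95% confidence intervals for matched groups. A minimal sketch of how such an estimate is computed from a 2×2 contingency table is given below; the counts are hypothetical placeholders (the abstract does not report the full case/non-case breakdown for each group), so the output does not reproduce the reported values.

```python
import math

def odds_ratio_with_ci(exposed_cases, exposed_noncases, control_cases, control_noncases, z=1.96):
    """Standard 2x2 odds ratio with a Wald-type 95% confidence interval."""
    or_hat = (exposed_cases / exposed_noncases) / (control_cases / control_noncases)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_noncases +
                          1 / control_cases + 1 / control_noncases)
    lo = math.exp(math.log(or_hat) - z * se_log_or)
    hi = math.exp(math.log(or_hat) + z * se_log_or)
    return or_hat, (lo, hi)

# Hypothetical counts for illustration only; not the study data.
or_hat, ci = odds_ratio_with_ci(exposed_cases=120, exposed_noncases=4880,
                                control_cases=200, control_noncases=4800)
print(f"OR = {or_hat:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```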
Procedia PDF Downloads 253
43 The Link between Strategic Sense-Making and Performance in Dubai Public Sector
Authors: Mohammad Rahman, Guy Burton, Megan Mathias
Abstract:
Strategic management as an organizational practice was adopted by the public sector in the New Public Management (NPM) era that began in most parts of the world in the 1980s. Strategy as a new public management concept was subscribed to by governments in both the developed and developing world, as they were persuaded that clearly defined vision, mission and goals, as well as programs and projects aligned with the goals, could potentially help achieve the government vision at the national level and organizational goals at the service-delivery level. The advocates for strategic management in the public sector saw an inherent link between strategy and performance, claiming that the implementation of organizational strategy has an effect on the overall performance of an organization. Arguably, many government entities that have failed in enhancing team and individual performance had a poorly designed strategy or weak strategy implementation. Another key argument about low-level performance is linked with a lack of strategic sense-making and orientation by middle managers in particular. Scholars maintain that employees at all levels need to understand the strategic management plan in order to facilitate its implementation. Therefore, involving employees (particularly the middle managers) from the beginning potentially helps an organization avoid a drop in performance and would, on the contrary, increase their commitment. The United Arab Emirates (UAE) is well known for adopting public sector reform strategies and tools since the 1990s. This observation is contextually pertinent in the case of the Government of Dubai, which has provided a Strategy Execution Guide to all of its entities to achieve high-level strategic success in service delivery. The Dubai public sector also adopts road maps for e-Government, Smart Dubai, Expo 2020, investment, environment, education, health and other sectors. Evidently, some of these strategies are bringing tangible results (e.g. the Smart Dubai transformation) in a transformational manner. However, the amount of academic research and literature on the strategy process vis-à-vis staff performance in the Government of Dubai is limited. Against this backdrop, this study examines how the individual performance of public sector employees in Dubai is linked with their sense-making, engagement and orientation with strategy development and implementation processes. Based on a theoretical framework, this study will undertake a sample-based questionnaire survey amongst middle managers in the Dubai public sector to (a) measure the level of engagement of middle managers in strategy development and implementation processes as perceived by them; (b) observe the organizational landscape in which role expectations are placed on middle managers; and (c) examine the impact of employee engagement in the strategy development process and the conditions for role expectations on individual performance. The paper is expected to provide new insights on the interface between strategic sense-making and performance in order to contribute to a better understanding of the current culture and practices of staff engagement in strategic management in the public sector of Dubai.
Keywords: employee performance, government of Dubai, middle managers, strategic sense-making
Procedia PDF Downloads 197
42 Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the eminent stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. The paper will discuss a comprehensive development framework, comprehending the SSD end to end from design to assembly, in-line inspection and in-line testing, which is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through intense reliability margin investigation with a focus on assembly process attributes, process equipment control and in-process metrology, while also comprehending the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up a reliability prediction model. Next, for the design validation process, the reliability prediction, specifically a solder joint simulator, will be established. The SSDs will be stratified into Non-Operating and Operating tests with a focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs will be subjected to physical solder joint analyses known as Dye and Pry (DP) and Cross Section analysis. The results will be fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven working, it will be subjected to implementation of the monitor phase, whereby Design for Assembly (DFA) rules will be updated. At this stage, the design changes and the process and equipment parameters are in control. Predictable product reliability at early product development will enable on-time sample qualification delivery to customers, optimize product development validation and effective development resources, and avoid forced late investment to bandage end-of-life product failures. Understanding the critical-to-reliability parameters earlier will allow focus on increasing the product margin, which will increase customer confidence in product reliability.
Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control
Procedia PDF Downloads 174
41 The Effect of Acute Muscular Exercise and Training Status on Haematological Indices in Adult Males
Authors: Ibrahim Musa, Mohammed Abdul-Aziz Mabrouk, Yusuf Tanko
Abstract:
Introduction: Long-term physical training affects the performance of athletes, especially females. Soccer, a team sport played on an outdoor field, requires an adequate oxygen transport system for maximal aerobic power during exercise in order to complete 90 minutes of competitive play. Suboptimal haematological status has often been recorded in athletes undertaking intensive physical activity. It may be due to iron depletion caused by hemolysis or to haemodilution resulting from plasma volume expansion. There is a lack of data regarding the dynamics of red blood cell variables in male football players. We hypothesized that a long competitive season involving frequent matches and intense training could influence red blood cell variables, as a consequence of repeated physical loads, when compared with sedentary individuals. Methods: This cross-sectional study was carried out on 40 adult males (20 athletes and 20 non-athletes) between 18 and 25 years of age. The 20 apparently healthy male non-athletes were taken as the sedentary group, and the 20 male footballers comprised the study group. The university institutional review board (ABUTH/HREC/TRG/36) gave approval for all procedures in accordance with the Declaration of Helsinki. Red blood cell (RBC) concentration, packed cell volume (PCV), and plasma volume were measured in the fasting state and immediately after exercise. Statistical analysis was done using SPSS/win 20.0 for comparison within and between the groups, using Student's paired and unpaired t-tests, respectively. Results: The findings from our study show that, immediately after termination of exercise, the mean RBC count and PCV decreased significantly (p<0.005), with a significant increase (p<0.005) in plasma volume when compared with pre-exercise values in both groups. In addition, the post-exercise RBC count was significantly higher in the untrained group (261.10±8.5) than in the trained group (255.20±4.5). However, there were no significant differences in the post-exercise hematocrit and plasma volume parameters between the sedentary group and the footballers. Moreover, besides changes in pre-exercise values among the sedentary group and the football players, the resting red blood cell count and plasma volume (PV %) were significantly (p < 0.05) higher in the sedentary group (306.30±10.05 x 10⁴/mm³; 58.40±0.54%) when compared with the football players (293.70±4.65 x 10⁴/mm³; 55.60±1.18%). On the other hand, the sedentary group exhibited a significant (p < 0.05) decrease in PCV (41.60±0.54%) when compared with the football players (44.40±1.18%). Conclusions: It is therefore proposed that the acute football-exercise-induced reduction in RBC and PCV is entirely due to plasma volume expansion, and not to red blood cell hemolysis. In addition, training status also influenced the haematological indices of male football players differently from the sedentary group at rest, due to adaptive response. This is a novel finding.
Keywords: Haematological Indices, Performance Status, Sedentary, Male Football Players
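The within-group (pre- versus post-exercise) and between-group (athletes versus sedentary) comparisons described above correspond to paired and unpaired t-tests, respectively. A minimal sketch using SciPy is shown below; the arrays hold hypothetical illustrative values, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical PCV (%) values for illustration only; not the study data.
pcv_pre_athletes = np.array([44.1, 45.0, 43.8, 44.6, 44.9])
pcv_post_athletes = np.array([42.9, 43.7, 42.5, 43.4, 43.6])
pcv_post_sedentary = np.array([41.2, 41.8, 41.5, 41.9, 41.4])

# Within-group comparison (same subjects before and after exercise): paired t-test.
t_paired, p_paired = stats.ttest_rel(pcv_pre_athletes, pcv_post_athletes)

# Between-group comparison (athletes vs. sedentary after exercise): unpaired t-test.
t_unpaired, p_unpaired = stats.ttest_ind(pcv_post_athletes, pcv_post_sedentary)

print(f"paired:   t = {t_paired:.2f}, p = {p_paired:.4f}")
print(f"unpaired: t = {t_unpaired:.2f}, p = {p_unpaired:.4f}")
```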
Procedia PDF Downloads 257
40 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
Solid particle distribution on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all the external forces. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes the deficiencies of the previous LBM-CA models and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model in simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D=2, 4 and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
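The core idea of the cellular-automata particle step, moving particle counts between lattice nodes with probabilities tied to the local velocity, can be illustrated with a much-simplified 2D sketch (NumPy on the CPU rather than CUDA, a single advection rule, no external forces, and a periodic domain). This is not the authors' D3Q27 model; it only illustrates probabilistic redistribution of particle counts.

```python
import numpy as np

rng = np.random.default_rng(0)

def ca_particle_step(n, ux, uy, dt=1.0, dx=1.0):
    """Very simplified 2D cellular-automata particle transport step.

    n      : integer particle counts at each lattice node
    ux, uy : local fluid velocity components at each node
    Each particle jumps to the downstream neighbour in x (or y) with a
    probability proportional to the local velocity magnitude; otherwise it
    stays where it is.
    """
    px = np.clip(np.abs(ux) * dt / dx, 0.0, 1.0)     # jump probability in x
    py = np.clip(np.abs(uy) * dt / dx, 0.0, 1.0)     # jump probability in y
    move_x = rng.binomial(n, px)                     # particles jumping in x
    stay = n - move_x
    move_y = rng.binomial(stay, py)                  # of the rest, particles jumping in y
    stay = stay - move_y

    new_n = stay.copy()
    for axis, counts, vel in ((1, move_x, ux), (0, move_y, uy)):
        step = np.sign(vel).astype(int)
        for s in (-1, 1):                            # scatter movers to the downstream node
            shifted = np.where(step == s, counts, 0)
            new_n += np.roll(shifted, s, axis=axis)
    return new_n

# Tiny demo: a uniform rightward flow advects a blob of particles.
n = np.zeros((8, 8), dtype=int)
n[4, 2] = 1000
ux = np.full((8, 8), 0.5)
uy = np.zeros((8, 8))
for _ in range(4):
    n = ca_particle_step(n, ux, uy)
print(n.sum(), n[4].tolist())   # total count is conserved; the blob drifts in +x
```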
Procedia PDF Downloads 207
39 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing
Authors: Ahmed Elaksher, Islam Omar
Abstract:
Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images that are used for topographic mapping. Most of these satellites carry push-broom sensors. These sensors are optical scanners equipped with linear arrays of CCDs and have been deployed on most Earth observation satellites (EOSs). In addition, the LROC is equipped with two push-broom NACs that provide 0.5-meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior orientation parameters and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopies with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to a reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t). The parameter (t) is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in different situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to those of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process is much more cost- and effort-consuming.
Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition
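A compact sketch of the push-broom geometry described above, polynomial exterior orientation in time combined with the collinearity condition for a line sensor, is given below. The focal length, polynomial coefficients, and ground point are invented illustrative values, and the rotation is simplified to an omega-phi-kappa sequence; the actual model in the paper also handles the three-array image reference system and the bundle adjustment.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Ground-to-image rotation from omega-phi-kappa angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def eo_at_epoch(t, coeffs):
    """Exterior orientation (Xs, Ys, Zs, omega, phi, kappa) as polynomials in time t."""
    return np.array([np.polyval(c, t) for c in coeffs])

def collinearity(ground_point, t, coeffs, focal_length):
    """Project a ground point into the line-sensor image at scan epoch t.

    For an ideal push-broom sensor the along-track image coordinate is ~0 at
    the epoch when the point is actually imaged; here the collinearity
    equations are simply evaluated at a given t.
    """
    Xs, Ys, Zs, omega, phi, kappa = eo_at_epoch(t, coeffs)
    d = rotation_matrix(omega, phi, kappa) @ (np.asarray(ground_point) - np.array([Xs, Ys, Zs]))
    x = -focal_length * d[0] / d[2]   # image x (along-track) coordinate
    y = -focal_length * d[1] / d[2]   # image y (across the CCD line) coordinate
    return x, y

# Illustrative (invented) values: linear along-track drift, constant attitude.
coeffs = [
    [500.0, 0.0],      # Xs(t) = 500 t  (m)
    [0.0, 0.0],        # Ys(t) = 0
    [0.0, 700000.0],   # Zs(t) = 700 km
    [0.0, 0.0], [0.0, 0.0], [0.0, 0.0],  # omega, phi, kappa (rad)
]
print(collinearity(ground_point=(5000.0, 120.0, 0.0), t=10.0, coeffs=coeffs, focal_length=0.7))
```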
Procedia PDF Downloads 63
38 Effect of Rolling Shear Modulus and Geometric Make up on the Out-Of-Plane Bending Performance of Cross-Laminated Timber Panel
Authors: Md Tanvir Rahman, Mahbube Subhani, Mahmud Ashraf, Paul Kremer
Abstract:
Cross-laminated timber (CLT) is made from layers of timber boards orthogonally oriented in the thickness direction; due to this, CLT can withstand bi-axial bending, in contrast with most other engineered wood products such as laminated veneer lumber (LVL) and glued laminated timber (GLT). Wood is cylindrically anisotropic in nature and is characterized by significantly lower elastic and shear moduli in the planes perpendicular to the fibre direction. It is therefore classified as an orthotropic material and is thus characterized by nine elastic constants: three elastic moduli in the longitudinal, tangential and radial directions; three shear moduli in the longitudinal-tangential, longitudinal-radial and radial-tangential planes; and three Poisson's ratios. For simplification, timber materials are generally assumed to be transversely isotropic, reducing the number of elastic properties characterizing them to five, where the longitudinal and radial planes are assumed to be planes of symmetry. The validity of this assumption was investigated through numerical modelling of CLT with both orthotropic and transversely isotropic material properties for three softwood species (Norway spruce, Douglas fir and Radiata pine) and three hardwood species (Victorian ash, Beech wood and Aspen), subjected to uniformly distributed loading under simply supported boundary conditions. It was concluded that assuming the timber to be transversely isotropic results in a negligible error, in the order of 1 percent. It was also observed that, along with the longitudinal elastic modulus, the ratio of the longitudinal shear modulus (GL) to the rolling shear modulus (GR) has a significant effect on deflection for CLT panels of lower span-to-depth ratio. For softwoods such as Norway spruce and Radiata pine, the ratio of the longitudinal shear modulus GL to the rolling shear modulus GR is reported in the literature to be in the order of 12 to 15. This results in shear flexibility in the transverse layers, leading to increased deflection under out-of-plane loading. The rolling shear modulus of hardwoods has been found to be significantly higher than that of softwoods, with the ratio between the longitudinal shear modulus and the rolling shear modulus as low as 4. This has resulted in a significant rise in research into the manufacturing of CLT entirely from hardwood, as well as from a combination of softwood and hardwood. The beam theories commonly used to analyze the performance of CLT panels under out-of-plane loads are the shear analogy method, the Gamma method, and the k-method. The shear analogy method has been found to be the most effective method where shear deformation is significant. The effect of the ratio of the longitudinal shear modulus to the rolling shear modulus of the cross-layer on the deflection of CLT under uniformly distributed load, with respect to its length-to-depth ratio, was investigated using the shear analogy method. It was observed that shear deflection is reduced significantly as the ratio of the shear modulus of the longitudinal layer to the rolling shear modulus of the cross-layer decreases. This indicates that there is significant room for improvement of the bending performance of CLT through developing hybrid CLT from a mix of softwood and hardwood.
Keywords: rolling shear modulus, shear deflection, ratio of shear modulus and rolling shear modulus, timber
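A minimal sketch of the shear analogy calculation referred to above is given below: effective bending and shear stiffness of a simply supported CLT strip under uniformly distributed load, with the midspan deflection split into bending and shear parts. The layer thicknesses, moduli and GL/GR ratios are illustrative assumptions, and the standard textbook form of the shear analogy stiffness expressions is used rather than any project-specific variant.

```python
# Shear analogy (simplified, per unit width b = 1 m) for a 5-layer CLT strip.
def clt_shear_analogy_deflection(layers, q, L, b=1.0):
    """layers: list of (thickness_m, E_parallel_to_span_Pa, G_for_shear_Pa).

    Cross layers carry E ~ 0 in bending about the major axis and their rolling
    shear modulus GR in the shear term; longitudinal layers use E0 and GL.
    Returns (total, bending, shear) midspan deflection for a UDL q (N/m).
    """
    h = [t for t, _, _ in layers]
    centroids, z = [], -sum(h) / 2.0
    for t in h:                        # layer centroid positions about the mid-depth
        centroids.append(z + t / 2.0)
        z += t
    # Effective bending stiffness: own terms plus Steiner (parallel-axis) terms.
    EI = sum(E * b * t**3 / 12.0 + E * b * t * zc**2
             for (t, E, _), zc in zip(layers, centroids))
    # Effective shear stiffness: a = distance between the outer-layer centroids.
    a = centroids[-1] - centroids[0]
    denom = (h[0] / (2 * layers[0][2] * b)
             + sum(t / (G * b) for t, _, G in layers[1:-1])
             + h[-1] / (2 * layers[-1][2] * b))
    GA = a**2 / denom
    w_bend = 5 * q * L**4 / (384 * EI)   # Euler-Bernoulli part
    w_shear = q * L**2 / (8 * GA)        # shear part for a simply supported UDL
    return w_bend + w_shear, w_bend, w_shear

# Illustrative 5-layer 200 mm lay-up (40 mm laminations), E0 = 11 GPa, GL = 690 MPa.
E0, GL = 11e9, 690e6
for GR in (GL / 15, GL / 4):             # softwood-like vs. hardwood-like rolling shear
    layers = [(0.04, E0, GL), (0.04, 1e6, GR), (0.04, E0, GL), (0.04, 1e6, GR), (0.04, E0, GL)]
    total, wb, ws = clt_shear_analogy_deflection(layers, q=5e3, L=4.0)
    print(f"GL/GR = {GL/GR:4.0f}: shear share of total deflection = {ws/total:.1%}")
```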
Procedia PDF Downloads 127
37 The Efficacy of Government Strategies to Control COVID 19: Evidence from 22 High Covid Fatality Rated Countries
Authors: Imalka Wasana Rathnayaka, Rasheda Khanam, Mohammad Mafizur Rahman
Abstract:
The COVID-19 pandemic has created unprecedented challenges to both the health and economic states of countries around the world. This study aims to evaluate the effectiveness of governments' decisions to mitigate the risks of COVID-19 by proposing policy directions to reduce its magnitude. The study is motivated by the ongoing coronavirus outbreaks and the comprehensive policy responses taken by countries to mitigate the spread of COVID-19 and reduce death rates. This study contributes to filling the knowledge gap by exploring the long-term efficacy of governments' extensive plans. The study employs a panel autoregressive distributed lag (ARDL) framework. The panels incorporate both a significant number of variables and fortnightly observations from 22 countries. The dependent variables adopted in this study are the fortnightly death rates and the rates of spread of COVID-19; mortality rate and rate of infection data were computed based on the number of deaths and the number of new cases per 10,000 people. The explanatory variables are fortnightly values of indexes used to investigate the efficacy of government interventions to control COVID-19: the overall government response index, the stringency index, the containment and health index, and the economic support index. The study relies on the Oxford COVID-19 Government Response Tracker (OxCGRT). Following the ARDL procedure, the study employs (i) unit root tests to check stationarity, (ii) panel cointegration tests, and (iii) PMG and ARDL estimation techniques. The study shows that the COVID-19 pandemic forced immediate responses from policymakers across the world to mitigate the risks of COVID-19. Of the four types of government policy interventions, (i) stringency and (ii) economic support have been the most effective, and the results reveal that stringency and financial measures have led to a reduction in infection and fatality rates, while (iii) overall government responses are positively associated with deaths but negatively with infected cases. Even though this positive relationship is unexpected to some extent in the long run, social distancing norms set by governments have been broken by the public in some countries, and population age demographics would be a possible reason for that result. (iv) Containment and healthcare improvements reduce death rates but increase infection rates, although the effect has been lower (in absolute value). The model implies that implementation of containment health practices without tracing and individual-level quarantine does not work well; the policy implication is that containment health measures must be applied together with targeted, aggressive, and rapid containment to extensively reduce the number of people infected with COVID-19. Furthermore, the results demonstrate that economic support for income and debt relief has been key to suppressing COVID-19 infection and fatality rates.
Keywords: COVID-19, infection rate, death rate, government response, panel data
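As a rough illustration of the estimation idea behind the panel ARDL approach described above, the sketch below fits a simple ARDL(1,1) per country with ordinary least squares and averages the implied long-run coefficients across countries (the mean-group estimator). The pooled mean group (PMG) estimator used in the study additionally constrains the long-run coefficients to be equal across countries, and the data here are simulated placeholders, not the OxCGRT series.

```python
import numpy as np

rng = np.random.default_rng(1)

def long_run_coefficient(y, x):
    """Fit y_t = c + lam*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t by OLS and
    return the implied long-run effect (b0 + b1) / (1 - lam)."""
    Y = y[1:]
    X = np.column_stack([np.ones(len(Y)), y[:-1], x[1:], x[:-1]])
    const, lam, b0, b1 = np.linalg.lstsq(X, Y, rcond=None)[0]
    return (b0 + b1) / (1.0 - lam)

# Simulated placeholder panel: 22 countries, 60 fortnights, true long-run effect about -0.5.
thetas = []
for _ in range(22):
    x = rng.normal(size=60).cumsum()                  # e.g. a stringency index
    y = np.zeros(60)                                  # e.g. an infection rate
    for t in range(1, 60):
        y[t] = 0.6 * y[t - 1] - 0.2 * x[t] + rng.normal(scale=0.3)
    thetas.append(long_run_coefficient(y, x))

print(f"mean-group long-run estimate: {np.mean(thetas):.3f} "
      f"(simulated truth -0.2/(1-0.6) = -0.5)")
```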
Procedia PDF Downloads 76
36 Enhancing Seismic Resilience in Urban Environments
Authors: Beatriz González-Rodrigo, Diego Hidalgo-Leiva, Omar Flores, Claudia Germoso, Maribel Jiménez-Martínez, Laura Navas-Sánchez, Belén Orta, Nicola Tarque, Orlando Hernández-Rubio, Miguel Marchamalo, Juan Gregorio Rejas, Belén Benito-Oterino
Abstract:
Cities facing seismic hazard require detailed risk assessments for effective urban planning and vulnerability identification, ensuring the safety and sustainability of urban infrastructure. Comprehensive studies involving seismic hazard, vulnerability, and exposure evaluations are pivotal for estimating potential losses and guiding proactive measures against seismic events. However, broad-scale traditional risk studies limit the consideration of specific local threats and the identification of vulnerable housing within a structural typology. Achieving precise results at the neighbourhood level demands higher-resolution seismic hazard, exposure, and vulnerability studies. This research aims to bolster sustainability and safety against seismic disasters in three Central American and Caribbean capitals. It integrates geospatial techniques and artificial intelligence into seismic risk studies, proposing cost-effective methods for exposure data collection and damage prediction. The methodology relies on prior seismic threat studies in pilot zones, utilizing existing exposure and vulnerability data in the region. Emphasizing detailed building attributes enables the consideration of behaviour modifiers affecting seismic response. The approach aims to generate detailed risk scenarios, facilitating the prioritization of preventive actions before, during, and after seismic events, and enhancing decision-making certainty. Detailed risk scenarios necessitate substantial investment in fieldwork, training, research, and methodology development. Regional cooperation becomes crucial given the similar seismic threats, urban planning, and construction systems among the countries involved. The outcomes hold significance for emergency planning and for national and regional construction regulations. The success of this methodology depends on cooperation, investment, and innovative approaches, offering insights and lessons applicable to regions facing moderate seismic threats with vulnerable constructions. Thus, this framework aims to fortify resilience in seismic-prone areas and serves as a reference for global urban planning and disaster management strategies. In conclusion, this research proposes a comprehensive framework for seismic risk assessment in high-risk urban areas, emphasizing detailed studies at finer resolutions for precise vulnerability evaluations. The approach integrates regional cooperation, geospatial technologies, and adaptive fragility curve adjustments to enhance risk assessment accuracy, guiding effective mitigation strategies and emergency management plans.
Keywords: assessment, behaviour modifiers, emergency management, mitigation strategies, resilience, vulnerability
Procedia PDF Downloads 68
35 Assessment of Energy Efficiency and Life Cycle Greenhouse Gas Emission of Wheat Production on Conservation Agriculture to Achieve Soil Carbon Footprint in Bangladesh
Authors: MD Mashiur Rahman, Muhammad Arshadul Haque
Abstract:
Emerging conservation agriculture (CA) is an option for improving soil health and maintaining environmental sustainability under intensive agriculture, especially in tropical climates. A three-year research experiment was performed in an arid climate from 2018 to 2020 at the research field of the Regional Agricultural Research Station (RARS), Jamalpur (soil texture belonging to Agro-Ecological Zone (AEZ)-8/9, 24˚56'11''N latitude, 89˚55'54''E longitude, and an altitude of 16.46 m) to evaluate the effect of CA approaches on the energy use efficiency and streamlined life cycle greenhouse gas (GHG) emissions of wheat production. For this, conservation tillage practices (strip tillage (ST) and minimum tillage (MT)) were adopted in comparison with conventional farmers' tillage (CT), with a fixed level (30 cm) of residue retention. This study examined the relationship between energy consumption and the life cycle GHG emissions of wheat cultivation in the Jamalpur region of Bangladesh. Standard energy equivalents in megajoules (MJ) were used to measure the energy of the different inputs and outputs; similarly, global warming potential values for the 100-year timescale and the standard unit of kilograms of carbon dioxide equivalent (kg CO₂eq) were used to estimate direct and indirect GHG emissions from the use of on-farm and off-farm inputs. The Farm Energy Analysis Tool (FEAT) was used to analyze GHG emission and its intensity. A non-parametric data envelopment analysis (DEA) was used to estimate the optimum energy requirement of wheat production. The results showed that the treatment combination of MT with optimum energy inputs is the best suited for cost-effective, sustainable CA practice in wheat cultivation, without compromising yield during the dry season. Total input energies of 22,045.86 MJ ha⁻¹, 22,158.82 MJ ha⁻¹, and 23,656.63 MJ ha⁻¹ were used in wheat production under ST, MT, and CT, respectively, and the output energy was calculated as 158,657.40 MJ ha⁻¹, 162,070.55 MJ ha⁻¹, and 149,501.58 MJ ha⁻¹, respectively; the energy use efficiency (net energy ratio) was found to be 7.20, 7.31 and 6.32. Among these, MT is the most effective practice option for the wheat production process. The optimum energy requirement was found to be 18,236.71 MJ ha⁻¹ for the MT practice, demonstrating that if the recommendations are followed, 18.7% of the input energy can be saved. The total GHG emission was calculated to be 2,288 kg CO₂eq ha⁻¹, 2,293 kg CO₂eq ha⁻¹ and 2,331 kg CO₂eq ha⁻¹, and the GHG intensity, the ratio of kg CO₂eq emitted per MJ of output energy produced, was estimated to be 0.014 kg CO₂/MJ, 0.014 kg CO₂/MJ and 0.015 kg CO₂/MJ for ST, MT, and CT wheat production, respectively. Therefore, among the CA approaches, the ST practice with 30 cm residue retention was the most effective GHG mitigation option when the net life cycle GHG emission was considered for wheat production in the silty clay loam soil of Bangladesh. In conclusion, the CA approaches being implemented for wheat production involving the MT practice have the potential to mitigate global warming potential in Bangladesh with respect to the soil carbon footprint, and the life cycle assessment approach needs to be applied to a more diverse range of wheat-based cropping systems.
Keywords: conservation agriculture and tillage, energy use efficiency, life cycle GHG, Bangladesh
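The energy use efficiency and GHG intensity figures quoted above are simple ratios of the reported per-hectare totals; the short sketch below reproduces them (up to rounding) directly from the numbers given in the abstract.

```python
# Reported per-hectare totals from the abstract (ST, MT, CT).
input_energy = {"ST": 22045.86, "MT": 22158.82, "CT": 23656.63}      # MJ/ha
output_energy = {"ST": 158657.40, "MT": 162070.55, "CT": 149501.58}  # MJ/ha
ghg = {"ST": 2288.0, "MT": 2293.0, "CT": 2331.0}                     # kg CO2eq/ha

for practice in ("ST", "MT", "CT"):
    energy_use_efficiency = output_energy[practice] / input_energy[practice]
    ghg_intensity = ghg[practice] / output_energy[practice]          # kg CO2eq per MJ of output
    print(f"{practice}: energy use efficiency = {energy_use_efficiency:.2f}, "
          f"GHG intensity = {ghg_intensity:.4f} kg CO2eq/MJ")
```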
Procedia PDF Downloads 102
34 An Analysis of Younger Consumers’ Perceptions, Purchasing Decisions, and Pro-Environmental Behavior: A Market Experiment on Green Advertising
Authors: Mokhlisur Rahman
Abstract:
Consumers have developed a sense of responsibility over the past decade, reflecting on their purchasing behavior after viewing an advertisement. Consumers tend to buy ideal products that enable them to be judged by their close network in the opinion world. In such value considerations, any information that feeds consumers' desire for social status helps, and this becomes capital that manufacturing companies can use to educate consumers on the importance of purchasing green products. Companies' efforts in manufacturing green products to achieve high conversion demand a good deal of promotion, with quality information and engaging representation. Additionally, converting people from traditional to eco-friendly products requires innovative alternatives to replace existing products. Considering consumers' understanding of products and their purchasing behavior, it becomes essential for brands to know the extent of consumers' awareness of the ecosystem in order to make them more responsive to green products. Another factor is that brand image plays a vital role in consumers' perception of the credibility of claims about a product. Brand image is a significant positive influence on the younger generation, and younger generations tend to engage more in pro-environmental behavior, including purchasing sustainable products. For example, Adidas senses the necessity of satisfying consumers with something that brings more profit and serves the planet. Several of their eco-friendly products are already on the market; one is the UltraBOOST DNA Parley, made from 3D-printed recycled ocean waste. With its strong brand image, Adidas has leveraged interest among the younger generation by incorporating sustainability into its advertising. Therefore, influential brands' efforts in the sustainable revolution through engaging advertisement become more prominent by educating consumers about the reason behind launching the product. This study investigates younger consumers' attitudes toward sustainability, brand recognition, exposure to green advertising, willingness to receive more green advertising, purchasing of green products, and motivation. The study conducts a market experiment by creating two video advertisements: a sustainable product video advertisement and a non-sustainable product video advertisement. Both videos have a similar content design and the same length of 2 minutes, but the messages differ for the identical product type, college bags. The first video advertisement promotes eco-friendly college bags made from biodegradable raw materials, and the second promotes non-sustainable college bags made from plastics. After viewing the videos, consumers make purchasing decisions and complete an online survey to collect their attitudes toward sustainable products. The study finds the importance of a sense of responsibility among consumers for climate change issues. It also empowers people to take a step, even a small one, and increases environmental awareness. This study provides companies with the knowledge to participate in sustainable product launches by collecting consumers' perceptions and attitudes toward green products. It also shows how important it is to build a brand's image for the younger generation.
Keywords: brand-image, environment, green-advertising, sustainability, younger-consumer
Procedia PDF Downloads 68
33 Mechanical and Durability Characteristics of Roller Compacted Geopolymer Concrete Using Recycled Concrete Aggregate
Authors: Syfur Rahman, Mohammad J. Khattak
Abstract:
Every year a huge quantity of recycled concrete aggregate (RCA) is generated in the United States of America. Utilization of RCA can solve the storage problem, prevent environmental pollution, and reduce construction cost. However, due to the overall low strength and durability characteristics of RCA, its usage is limited to certain applications such as landfill, low-strength base material, and replacement of a small percentage of virgin aggregates in Portland cement concrete. This study focuses on the improvement of the strength and durability characteristics of RCA by introducing the concept of roller-compacted geopolymer concrete. In this research, the developed roller-compacted geopolymer concrete (RCGPC) and roller-compacted cement concrete (RCC) mixtures containing 100% recycled concrete aggregate were evaluated and compared. Several selected RCGPC mixtures were investigated to find out the effect of mixture variables, including the sodium hydroxide (NaOH) molar concentration and the sodium silicate (Na₂SiO₃) to sodium hydroxide (NaOH) ratio, on the strength, stiffness, and durability characteristics of the developed RCGPC. Sodium hydroxide (NaOH) and sodium silicate (Na₂SiO₃) were mixed in different ratios to synthesize the alkali activator. The American Concrete Pavement Association (ACPA) recommended RCC gradation was used, with a maximum nominal aggregate size of 19 mm and 4% fine particles passing the 0.075 mm sieve. The mixtures were made using NaOH molar concentrations of 8M and 10M along with Na₂SiO₃ to NaOH ratios of 0 and 1 by mass, and 15% class F fly ash. Optimum alkali content and moisture content were determined for the RCGPC and RCC mixtures, respectively, using the modified Proctor test. Compressive strength, semi-circular bending beam strength, and dynamic modulus tests were conducted to evaluate the mechanistic characteristics of both mixtures. To determine the optimum curing conditions for RCGPC, the effects of different curing temperatures and curing durations on compressive strength were also studied. Sulphate attack and freeze-thaw tests were also carried out to assess the durability properties of the developed mixtures. X-ray diffraction (XRD) was used for morphology and microstructure analysis. From the optimum moisture content results, it was found that RCGPC has a high alkali content, which was mainly due to the high absorption capacity of RCA. It was found that the mixtures with a Na₂SiO₃ to NaOH ratio of 1 yielded about 60% higher compressive strength than those with a ratio of 0. Further, the mixtures using 10M NaOH concentration and an alkali ratio of 1 produced about 28 MPa of compressive strength, which was around 33% higher than the 8M NaOH mixtures. Similar results were obtained for the elastic and dynamic moduli of the mixtures. On the other hand, the semi-circular bending beam strength remained the same for both the 8 and 10 molar NaOH geopolymer mixtures. The formation of new geopolymeric compounds and chemical bonds in the newly formed novel RCGPC mixtures was also observed using XRD analysis. The results of mechanical and durability testing further revealed that RCGPC performed similarly to the RCC mixtures. Based on these results, the developed RCGPC mixtures using 100% recycled concrete could be used as a cost-effective solution for the construction of pavement structures.
Keywords: roller compacted concrete, geopolymer concrete, recycled concrete aggregate, concrete pavement, fly ash
Procedia PDF Downloads 137
32 Follicular Thyroid Carcinoma in a Developing Country: A Retrospective Study of 10 Years
Authors: Abdul Aziz, Muhammad Qamar Masood, Saadia Sattar, Saira Fatima, Najmul Islam
Abstract:
Introduction: The most common endocrine tumor is thyroid cancer. Follicular thyroid carcinoma (FTC) accounts for 5%–10% of all thyroid cancers. Patients with FTC frequently present with more advanced-stage disease and a higher occurrence of distant metastases because of the propensity for vascular invasion. FTC is mainly treated with surgery, while radioactive iodine (RAI) therapy is the main adjuvant therapy as per ATA guidelines. In many developing countries, surgical facilities and radioactive iodine are in short supply; therefore, understanding follicular thyroid cancer trends may help developing countries plan and use resources more effectively. Methodology: This was a retrospective observational study of FTC patients aged 18 years and above conducted at Aga Khan University Hospital, Karachi, from 1st January 2010 to 31st December 2019. Results: There were 404 patients with thyroid carcinoma, of whom forty (10.1%) had FTC. 50% of the patients were in the 41-60 years age group, and the female to male ratio was 1.5:1. Twenty-four patients (60%) presented with the complaint of neck swelling, followed by metastasis (20%) and compressive symptoms (20%). The most common site of metastasis was bone (87.5%), followed by lung (12.5%). The pre-operative thyroglobulin level was measured in six of the eight metastatic patients (75%), in whom it was elevated. This emphasizes the importance of checking the thyroglobulin level in patients with an unusual presentation (bone pain, fractures) who also have neck swelling, to help establish the primary source of the tumor. There was no complete documentation of the ultrasound features of the thyroid gland in all patients, although this is an important investigation in the initial evaluation of a thyroid nodule. On FNAC, 50% (20 patients) had Bethesda category III-IV nodules, while 10% (4 patients) had Bethesda category II nodules. In sixteen patients, FNAC was not done, as they presented with compressive symptoms or metastasis. Fifty percent had a total thyroidectomy, and 50% had a subtotal followed by completion thyroidectomy; ten patients also had lymph node dissection, of whom seven had histopathological lymph node involvement. On histopathology, twenty-three patients (57.5%) had minimally invasive and seventeen (42.5%) had widely invasive follicular thyroid carcinoma. Capsular invasion was present in thirty-three patients (82.5%); one patient had no capsular invasion but had vascular invasion, and six patients' histopathology reports had no record of capsular invasion. In contrast, lymphovascular invasion was present in twenty-six patients (65%). In this study, 65% of the patients had clinical stage 1 disease, while 25% had stage 2 and 10% had clinical stage 4. Seventeen patients (42.5%) received RAI 30-100 mCi, while ten patients (25%) received more than 100 mCi. Conclusion: The demographic and clinicopathological presentation of FTC in Pakistan is the same as in other countries. Surgery followed by RAI is the mainstay of treatment. Thus, understanding the trend of FTC and proper planning and utilization of resources will help developing countries treat FTC effectively.
Keywords: thyroid carcinoma, follicular thyroid carcinoma, clinicopathological features, developing countries
Procedia PDF Downloads 191
31 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding
Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta
Abstract:
Chimneys are generally tall and slender structures with circular cross-sections, due to which they are highly prone to wind forces. Wind exerts pressure on the wall of a chimney, which produces unwanted forces. Vortex-induced oscillation is one such excitation that can lead to the failure of chimneys. Therefore, vortex-induced oscillation of chimneys is of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over the decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively, very few prototype measurement data have been recorded to verify the proposed theoretical models. For this reason, the theoretical models developed with the help of experimental laboratory data are utilized for analyzing chimneys for vortex-induced forces. This calls for a reliability analysis of the predicted responses of chimneys produced by the vortex shedding phenomenon. Although a considerable literature exists on the vortex-induced oscillation of chimneys, including code provisions, the reliability analysis of chimneys against failure caused by vortex shedding is scanty. In the present study, the reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are hence ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency domain spectral analysis using a matrix approach. For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for the aero-elastic effects. The double barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement of the chimney is determined. The reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a thickness of 0.3 m has been taken as an illustrative example. The terrain condition is assumed to be that corresponding to a city center. The expression for the PSDF of the vortex shedding force is taken as that proposed by Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration
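The final step described above, turning conditional threshold-crossing probabilities into an annual fragility curve by integrating over a Gumbel type-I distribution of the annual mean wind speed, can be sketched in a few lines. The conditional crossing-probability function and the Gumbel parameters below are placeholder assumptions, not the Vanmarcke double-barrier and Vickery-Basu formulation used in the study; only the structure of the calculation is illustrated.

```python
import numpy as np

def gumbel_pdf(v, mu, beta):
    """Gumbel type-I (maximum) probability density of the annual mean wind speed."""
    z = (v - mu) / beta
    return np.exp(-(z + np.exp(-z))) / beta

def crossing_prob_given_wind(v, threshold, v_crit=25.0, width=4.0, scale=0.05):
    """Placeholder conditional probability that the tip displacement exceeds a
    threshold (m) in one year, given a mean wind speed v (m/s). In the study
    this comes from Vanmarcke's double-barrier crossing formula applied to the
    spectral response; here it is a simple stand-in peaking near the critical
    vortex-shedding speed."""
    resonance = np.exp(-((v - v_crit) / width) ** 2)
    return 1.0 - np.exp(-scale * resonance / threshold)

def annual_fragility(threshold, mu=20.0, beta=3.0):
    """P(annual crossing) = integral of P(crossing | v) * f_Gumbel(v) dv."""
    v = np.linspace(0.0, 80.0, 2000)
    dv = v[1] - v[0]
    return np.sum(crossing_prob_given_wind(v, threshold) * gumbel_pdf(v, mu, beta)) * dv

for d in (0.1, 0.2, 0.4, 0.8):   # tip-displacement thresholds in metres
    p = annual_fragility(d)
    print(f"threshold {d:4.1f} m: annual crossing probability = {p:.3e}, "
          f"reliability = {1 - p:.5f}")
```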
Procedia PDF Downloads 160
30 Prevalence and Risk Factors of Cardiovascular Diseases among Bangladeshi Adults: Findings from a Cross Sectional Study
Authors: Fouzia Khanam, Belal Hossain, Kaosar Afsana, Mahfuzar Rahman
Abstract:
Aim: Although cardiovascular disease (CVD) has already been recognized as a major cause of death in developed countries, its prevalence is rising in developing countries as well, engendering a challenge for the health sector. Bangladesh has experienced an epidemiological transition from communicable to non-communicable diseases over the last few decades, so the rising prevalence of CVD and its risk factors poses a major problem for the country. We aimed to examine the prevalence of CVD and the socioeconomic and lifestyle factors related to it from a population-based survey. Methods: The data used for this study were collected as part of a large-scale cross-sectional study conducted to explore the overall health status of children, mothers, and senior citizens of Bangladesh. A multistage cluster random sampling procedure was applied, considering unions as clusters and households as the primary sampling unit, to select a total of 11,428 households for the base survey. The present analysis encompassed 12,338 respondents aged ≥ 35 years, selected from both rural areas and urban slums of the country. Socio-economic, demographic, and lifestyle information was obtained from each individual through a face-to-face interview and recorded on the ODK platform, and height, weight, blood pressure, and glycosuria were measured using standardized methods. The chi-square test and univariate and multivariate modified Poisson regression models were run using STATA software (version 13.0) for analysis. Results: Overall, the prevalence of CVD was 4.51%, of which 1.78% was stroke and 3.17% heart disease. Males had a higher prevalence of stroke (2.20%) than their counterparts (1.37%). Notably, thirty percent of respondents had high blood pressure, 5% had diabetes, and more than half of the population was pre-hypertensive. Additionally, 20% were overweight, 77% smoked or consumed smokeless tobacco, and 28% of respondents were physically inactive. Eighty-two percent of respondents took extra salt while eating, and 29% of respondents were sleep-deprived. Furthermore, the prevalence of CVD risk factors varied by gender. Women had a higher prevalence of overweight, obesity, and diabetes. Women were also less physically active than men and took more extra salt. Smoking was lower in women than in men. Moreover, women slept less than their counterparts. After adjusting for confounders in the modified Poisson regression model, age, gender, occupation, wealth quintile, BMI, extra salt intake, daily sleep, tiredness, diabetes, and hypertension remained risk factors for CVD. Conclusion: The prevalence of CVD is significant in Bangladesh, and there is evidence of a rising trend in its risk factors, such as hypertension and diabetes, especially in the older population, women, and high-income groups. Therefore, in the current epidemiological transition, immediate public health intervention is warranted to address the overwhelming CVD risk.
Keywords: cardiovascular diseases, diabetes, hypertension, stroke
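The "modified Poisson regression" mentioned above is commonly implemented as a Poisson GLM with a robust (sandwich) variance estimator so that prevalence ratios can be estimated for a binary outcome. A minimal sketch with simulated placeholder data is shown below; the study itself used STATA, not Python, and the covariates and effect size here are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000

# Simulated placeholder covariates: standardized age and a hypertension indicator.
age = rng.normal(size=n)
htn = rng.binomial(1, 0.3, size=n)
true_pr_htn = 1.8                                   # assumed prevalence ratio
p = np.clip(0.04 * np.exp(0.3 * age + np.log(true_pr_htn) * htn), 0, 1)
cvd = rng.binomial(1, p)                            # binary outcome

X = sm.add_constant(np.column_stack([age, htn]))
# Poisson GLM on a binary outcome + robust (HC0) covariance = "modified Poisson".
model = sm.GLM(cvd, X, family=sm.families.Poisson()).fit(cov_type="HC0")
prevalence_ratios = np.exp(model.params)
print("prevalence ratios (const, age, hypertension):", np.round(prevalence_ratios, 2))
print("robust standard errors:", np.round(model.bse, 3))
```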
Procedia PDF Downloads 381
29 De-Densifying Congested Cores of Cities and Their Emerging Design Opportunities
Authors: Faith Abdul Rasak Asharaf
Abstract:
Every city has a threshold, known as the urban carrying capacity, up to which it can withstand a particular density of people; above this threshold the city might need to resort to measures such as expanding its boundaries or growing vertically. As a result of this circumstance, the number of squatter communities is growing, as is the claustrophobic feeling of being confined inside a "concrete jungle." The expansion of suburbs, commercial areas, and industrial real estate in the areas surrounding medium-sized cities has resulted in changes to their landscapes and urban forms, as well as a systematic shift in their role in the urban hierarchy when functional endowment and connections to other territories are considered. The urban carrying capacity idea provides crucial guidance for city administrators and planners in better managing, designing, planning, constructing, and distributing urban resources to satisfy the huge demands of an ever-growing urban population. The ecological footprint, the amount of land required to provide humanity with renewable resources and absorb its waste, is one criterion of urban carrying capacity. However, as each piece of land has its own carrying capacity, including ecological, social, and economic considerations, these metropolitan areas begin to reach a saturation point over time. Various city models have been tried over the years to accommodate increasing urban population density by relocating the zones of work, life, and leisure to achieve maximum sustainable growth. The current scenario is that of the vertical city and compact city concepts, in which the maximum density of people is fitted into a definite area using efficient land use and a variety of other strategies; this has proven to be a very unsustainable method of growth, as evidenced by the COVID-19 period. Due to a shortage of housing and basic infrastructure, densely populated cities gave rise to massive squatter communities, being unable to accommodate the overflowing migrants. To achieve optimum carrying capacity, planning measures such as the polycentric city and diffuse city concepts can be implemented, which will help relieve the congested city core by relocating certain sectors of the town to the city periphery, creating new spaces for design in terms of public space, transportation, and housing, the latter being a major concern in the current scenario. The study's goal is to suggest design options and solutions in terms of placemaking for better urban quality and urban life for citizens once city centres have been de-densified based on urban carrying capacity and ecological footprint, taking Kochi as an apt example of a highly densified city core and focusing on Edappally, which is an agglomeration of many urban factors.
Keywords: urban carrying capacity, urbanization, urban sprawl, ecological footprint
Procedia PDF Downloads 7928 Clinical Response of Nuberol Forte® (Paracetamol 650 MG+Orphenadrine 50 MG) For Pain Management with Musculoskeletal Conditions in Routine Pakistani Practice (NFORTE-EFFECT)
Authors: Shahid Noor, Kazim Najjad, Muhammad Nasir, Irshad Bhutto, Abdul Samad Memon, Khurram Anwar, Tehseen Riaz, Mian Muhammad Hanif, Nauman A. Mallik, Saeed Ahmed, Israr Ahmed, Ali Yasir
Abstract:
Background: Musculoskeletal pain is the most common complaint presented to health practitioners. It is well known that untreated or under-treated pain can have a significant negative impact on an individual’s quality of life (QoL). Objectives: This study was conducted across 10 sites in six (6) major cities of Pakistan to evaluate the tolerability, safety, and clinical response of Nuberol Forte® (Paracetamol 650 mg + Orphenadrine 50 mg) for musculoskeletal pain in routine Pakistani practice and its impact on improving patients’ QoL. Design & Methods: This NFORTE-EFFECT observational, prospective, multicenter study was conducted in compliance with Good Clinical Practice (GCP) guidelines and local regulatory requirements. The study sponsor was The Searle Company Limited, Pakistan. To maintain GCP compliance, the sponsor assigned a CRO for site and data management. Ethical approval was obtained from an independent ethics committee (IEC), and the IEC reviewed the progress of the study. Written informed consent was obtained from the study participants, and their confidentiality was maintained throughout the study. A total of 399 patients with known, prescreened musculoskeletal conditions and pain who attended the study sites were recruited as per the inclusion/exclusion criteria (clinicaltrials.gov ID# NCT04765787). The recruited patients were then prescribed the Paracetamol (650 mg) and Orphenadrine (50 mg) combination (Nuberol Forte®) for 7 to 14 days at the investigator's discretion, based on pain intensity. After the initial screening (visit 1), a follow-up visit was conducted after 1-2 weeks of treatment (visit 2). Study Endpoints: The primary objective was to assess the pain-management response to Nuberol Forte treatment and the overall safety of the drug. The Visual Analogue Scale (VAS) was used to measure pain severity. Secondary to pain, the patients' health-related quality of life (HRQoL) was also assessed using the Muscle, Joint Measure (MJM) scale. Safety was monitored from the first dose taken by the patients. These assessments were done at each study visit. Results: Of the 399 enrolled patients, 49.4% were males and 50.6% were females, with a mean age of 47.24 ± 14.20 years. Most patients presented with knee osteoarthritis (OA) (148; 38%), followed by backache (70; 18.2%). A significant reduction in the mean pain score was observed after treatment with the combination of Paracetamol and Orphenadrine (p<0.05). Furthermore, an overall improvement in the patients’ QoL was also observed. During the study, only ten patients reported mild adverse events (AEs). Conclusion: The combination of Paracetamol and Orphenadrine (Nuberol Forte®) exhibited effective pain management among patients with musculoskeletal conditions and also improved their QoL.Keywords: musculoskeletal pain, orphenadrine/paracetamol combination, pain management, quality of life, Pakistani population
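The pre/post reduction in mean VAS score reported above can be illustrated with a minimal paired-comparison sketch. The abstract does not specify which statistical test was applied, so the paired t-test, the synthetic scores and the sample size handling below are assumptions for illustration, not the study's actual analysis.

# Illustrative paired pre/post comparison of VAS pain scores (0-10 scale).
# All numbers are synthetic; the abstract only reports a reduction with p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
baseline = rng.normal(7.0, 1.2, 399)              # VAS at visit 1 (screening)
followup = np.clip(baseline - rng.normal(3.0, 1.0, 399), 0, 10)  # VAS at visit 2

t_stat, p_value = stats.ttest_rel(baseline, followup)
print(f"mean reduction = {np.mean(baseline - followup):.2f}, p = {p_value:.3g}")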
Procedia PDF Downloads 16927 A Long-Standing Methodology Quest Regarding Commentary of the Qur’an: Modern Debates on Function of Hermeneutics in the Quran Scholarship in Turkey
Authors: Merve Palanci
Abstract:
This paper aims to reveal and analyze methodology debates on Qur’an commentary in Turkish scholarship and to draw sound conclusions about the current situation, with reference to the literature revolving around the credibility of hermeneutics in the case of Qur’an commentary and the methodological connotations related to it, together with other modern approaches to the Qur’an. It is fair to say that Tafseer, constituting one of the main branches of the basic Islamic sciences, has drawn great attention from both Muslim and non-Muslim scholars for a long time. With the emergence of an acute junction between the natural sciences and the social sciences in the post-Enlightenment period, this interest has paved the way for methodology discussions conducted in theological circles, which occupy a noticeable slot in the Tafseer literature as well. A panoramic glance at the classical treatises on the methodology of Tafseer, namely Usul al-Tafseer, leads the reader to the conclusion that these classics are intrinsically aimed at introducing the Qur’an and the early history of its formation as a corpus and at providing a better understanding of its content. To illustrate, the earliest extant methodology work for Qur’an commentary, al-Aql wa'l Fahm al-Qur’an by Harith al-Muhasibi, covers content that deals with the Qur’an’s rhetoric, its muhkam and mutashabih, abrogation, and so on. Most of the themes in question evidently share a common ground: understanding the Scripture and producing an accurate commentary built on this preliminary phenomenon of understanding. The content of other renowned works with an overtone of Tafseer methodology, such as Funun al-Afnan and al-Iqsir fi Ilm al-Tafseer, and succeeding ones such as al-Itqan and al-Burhan, is also rich in hints related to the preliminary phenomenon of understanding. However, these works cannot be classified as full-fledged methodology manuals assuring a true understanding of the Qur’an. Hermeneutics is believed to supply substantial data applicable to Qur’an commentary, as it deals with the nature of understanding itself. Referring to the latest tendencies in Tafseer methodology, this paper seeks to centralize hermeneutical debates in the modern scholarship of Qur’an commentary and the incentives that lead scholars to apply hermeneutics in the Tafseer literature. Guided by these incentives, the study involves three parts. In the introduction, the paper presents the key features of classical methodology works in general terms and traces the main methodological shifts of modern times in Qur’an commentary. To this end, the revisionist school, scientific Qur’an commentary ventures, and thematic Qur’an commentary are included and analysed briefly; however, historical-critical commentary on the Qur’an, as it bears a close relationship with hermeneutics, is handled predominantly. The second part addresses the hermeneutical nature of understanding the Scripture, revealing a timeline for the beginning of hermeneutics debates in Tafseer, and Fazlur Rahman’s (d. 1988) influence will be presented to establish a theoretical bridge. In the following part, reactions against the application of hermeneutics in Tafseer activity and pro-hermeneutics works will be revealed through cross-references to the prominent figures of both, and the literature in question in theology scholarship in Turkey will be explored critically.Keywords: hermeneutics, Tafseer, methodology, Ulum al-Qur’an, modernity
Procedia PDF Downloads 7526 E-Waste Generation in Bangladesh: Present and Future Estimation by Material Flow Analysis Method
Authors: Rowshan Mamtaz, Shuvo Ahmed, Imran Noor, Sumaiya Rahman, Prithvi Shams, Fahmida Gulshan
Abstract:
The last few decades have witnessed a phenomenal rise in the use of electrical and electronic equipment globally in our everyday life. As these items reach the end of their lifecycle, they turn into e-waste and contribute to the waste stream. Bangladesh, in conformity with the global trend and due to its ongoing rapid growth, is also using electronics-based appliances and equipment at an increasing rate. This has caused a corresponding increase in the generation of e-waste. Bangladesh is a developing country; its overall waste management system is not yet efficient, nor is it environmentally sustainable. Most of its solid waste is disposed of in a crude way at dumping sites. The addition of e-waste, which often contains toxic heavy metals, to its waste stream has made the situation more difficult and challenging. Assessment of e-waste generation is an important step towards addressing the challenges posed by e-waste, setting targets, and identifying best practices for its management. Understanding and proper management of e-waste is a stated item of the Sustainable Development Goals (SDG) campaign, and Bangladesh is committed to fulfilling it. A better understanding and the availability of reliable baseline data on e-waste will help prevent illegal dumping, promote recycling, and create jobs in the recycling sector, thus facilitating sustainable e-waste management. With this objective in mind, the present study has attempted to estimate the amount of e-waste and its future generation trend in Bangladesh. To achieve this, sales data on eight selected electrical and electronic products (TV, Refrigerator, Fan, Mobile phone, Computer, IT equipment, CFL (Compact Fluorescent Lamp) bulbs, and Air Conditioner) have been collected from different sources. Primary and secondary data on the collection, recycling, and disposal of e-waste have also been gathered through a questionnaire survey, field visits, interviews, and formal and informal meetings with stakeholders. The Material Flow Analysis (MFA) method has been applied, and mathematical models have been developed in the present study to estimate e-waste amounts and their future trends up to the year 2035 for the eight selected products. The end-of-life (EOL) method is adopted in the estimation. Model inputs are the products’ annual sale/import data, past and future sales data, and average life span. From the model outputs, it is estimated that the generation of e-waste in Bangladesh in 2018 was 0.40 million tons, and by 2035 the amount will be 4.62 million tons, with an average annual growth rate of 20%. Among the eight selected products, the amount of e-waste generated from seven products is increasing, whereas only one product, the CFL bulb, showed a decreasing trend of waste generation. The average growth rate of e-waste from TV sets is the highest (28%), while those from Fans and IT equipment are the lowest (11%). Field surveys conducted in the e-waste recycling sector also revealed that every year around 0.0133 million tons of e-waste enters the recycling business in Bangladesh, which may increase in the near future.Keywords: Bangladesh, end of life, e-waste, material flow analysis
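The end-of-life (EOL) estimation described above amounts to shifting sales data by the average product lifespan and converting units to mass. A minimal sketch follows, in which the sales figures, lifespans and unit masses are invented placeholders rather than the study's inputs.

# Sketch of the end-of-life (EOL) material flow estimate:
# e-waste in year t = units sold in year (t - average lifespan) x unit mass.
# Sales, lifespans and masses below are hypothetical, not the study's data.
sales = {                        # units sold per year (toy numbers)
    "TV":           {2008: 1.2e6, 2009: 1.4e6, 2010: 1.7e6},
    "Refrigerator": {2008: 0.8e6, 2009: 0.9e6, 2010: 1.0e6},
}
lifespan_years = {"TV": 10, "Refrigerator": 9}
unit_mass_kg   = {"TV": 20.0, "Refrigerator": 45.0}

def eol_ewaste_tonnes(product: str, year: int) -> float:
    """E-waste (tonnes) from one product reaching end of life in `year`."""
    sold_year = year - lifespan_years[product]
    units = sales.get(product, {}).get(sold_year, 0.0)
    return units * unit_mass_kg[product] / 1000.0

for product in sales:
    print(product, eol_ewaste_tonnes(product, 2018), "t reaching EOL in 2018")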
Procedia PDF Downloads 19825 Advertising Campaigns for a Sustainable Future: The Fight against Plastic Pollution in the Ocean
Authors: Mokhlisur Rahman
Abstract:
The ocean hosts one of the most complex ecosystems on the planet, regulating the earth's climate and weather and providing us with conditions compatible with life. The ocean provides food, sustaining the various ways of life that depend on it; transportation, accommodating the world's biggest carriers; recreation, offering its beauty in many moods; and a home to countless species. In pursuit of various forms of entertainment, consumers choose to be close to the ocean while performing many fun activities, which, at some point, upsets the stomach of the ocean by threatening marine life and the environment. Consumers throw their waste into the ocean after use. Most of it is plastic, which floats on the ocean and breaks into thousands of micro pieces that are hard to observe with the naked eye but easily eaten by sea species. Eventually, that conflicts with the natural consumption process of any living species, making them sick. This information is not known by most consumers who go to the sea or seashore occasionally to spend time, nor is it widely discussed, which creates an information gap among consumers. However, advertising is a powerful tool to educate people about ocean pollution. This abstract analyzes three major ocean-saving advertisement campaigns that use innovative and advanced technology to get maximum exposure. The study collects data from the selected campaigns' websites and retrieves all available content related to messages, videos, and images. First, the SeaLegacy campaign uses stunning images to create awareness among people; it uses social media content, videos, and other educational content. It creates content and strategies to build an emotional connection with consumers that encourages them to take action, and all the messages in its campaign empower consumers through powerful words. Second, the Ocean Conservancy campaign uses social media marketing, events, and educational content to protect the ocean from various pollutants, including plastics, climate change, and overfishing. It uses powerful images and videos of marine life, and its mission is to create evidence-based solutions towards a healthy ocean; its message addresses local communities along with sea species. Third, ocean clean-up is a campaign that applies strategies using innovative technologies to remove plastic waste from the ocean. It uses social media, digital, and email marketing to reach people and raise awareness, and it also uses images and videos to evoke an emotional response and prompt action. These three advertisements use realistic images, powerful words, and the presence of living species in their imagery, which are eye-catching and can grow an emotional connection with consumers. Identifying the effectiveness of the messages these advertisements carry and their strategies highlights the knowledge gap between real pollution and its consequences among the general public and makes the message more accessible to a mass audience. This study aims to provide insights into the effectiveness of ocean-saving advertisement campaigns and their impact on the public's awareness of ocean conservation. The findings from this study help shape future campaigns.Keywords: advertising-campaign, content-creation, images, ocean-saving technology, videos
Procedia PDF Downloads 7824 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods for soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques for spatially interpolating values at unobserved locations from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at unsampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project in Selangor, Malaysia were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural-network-predicted values). For each macronutrient element, three types of maps were generated, with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding maps produced from a smaller number of real samples alone. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN-predicted samples to achieve the specified level of accuracy.Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping
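A back-propagation multilayer feed-forward regressor of the kind described can be sketched as follows. The library (scikit-learn), the layer sizes, the inputs (sample coordinates) and the toy N-P-K values are assumptions for illustration, not the study's actual network configuration or data.

# Sketch of a back-propagation feed-forward network predicting soil N, P, K at
# unsampled points; architecture, inputs and data are assumed for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
xy  = rng.uniform(0, 1000, size=(236, 2))                         # coordinates (m)
npk = rng.uniform([0.1, 5, 50], [0.4, 40, 300], size=(236, 3))    # toy N, P, K

X_train, X_test, y_train, y_test = train_test_split(
    xy, npk, test_size=0.3, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(20, 10), activation="tanh",
                   solver="adam", max_iter=5000, random_state=0)
net.fit(X_train, y_train)
print("R^2 on held-out points:", net.score(X_test, y_test))

# predicted ("virtual") values at unsampled grid points, which could then be
# combined with real samples as input to kriging
grid = np.stack(np.meshgrid(np.linspace(0, 1000, 50),
                            np.linspace(0, 1000, 50)), axis=-1).reshape(-1, 2)
virtual_npk = net.predict(grid)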
Procedia PDF Downloads 7023 Comparing Community Health Agents, Physicians and Nurses in Brazil's Family Health Strategy
Authors: Rahbel Rahman, Rogério Meireles Pinto, Margareth Santos Zanchetta
Abstract:
Background: Existing shortcomings of current health-service delivery include poor teamwork, competencies that do not address consumer needs, and episodic rather than continuous care. Brazil’s Sistema Único de Saúde (Unified Health System, UHS) is acknowledged worldwide as a model for delivering community-based care through Estratégia Saúde da Família (FHS; Family Health Strategy) interdisciplinary teams, comprised of Community Health Agents (in Portuguese, Agentes Comunitários de Saúde, ACS), nurses, and physicians. FHS teams are mandated to collectively offer clinical care, disease prevention services, vector control, health surveillance and social services. Our study compares medical providers (nurses and physicians) and community-based providers (ACS) on their perceptions of work environment, professional skills, cognitive capacities and job context. Global health administrators and policy makers can leverage the similarities and differences across care providers to develop interprofessional training for community-based primary care. Methods: Cross-sectional data were collected from 168 ACS, 62 nurses and 32 physicians in Brazil. We compared providers’ demographic characteristics (age, race, and gender) and job context variables (caseload, work experience, work proximity to the community, length of commute, and familiarity with the community). Providers' perceptions were compared with respect to their work environment (work conditions and work resources), professional skills (consumer input, interdisciplinary collaboration, efficacy of FHS teams, work methods and decision-making autonomy), and cognitive capacities (knowledge and skills, skill variety, confidence and perseverance). Descriptive and bivariate analyses, such as Pearson Chi-square and Analysis of Variance (ANOVA) F-tests, were performed to draw comparisons across providers. Results: The majority of participants were ACS (64%), followed by nurses (24%) and physicians (12%). The majority of nurses and ACS identified as mixed race (ACS, n=85; nurses, n=27); most physicians identified as male (n=16; 52%) and white (n=18; 58%). Physicians were less likely to incorporate consumer input and demonstrated greater decision-making autonomy than nurses and ACS. ACS reported the highest levels of knowledge and skills but the least confidence compared to nurses and physicians. ACS, nurses, and physicians shared the belief that FHS teams improved the quality of health in their catchment areas, though nurses tended to disagree that interdisciplinary collaboration facilitated their work. Conclusion: To our knowledge, there has been no study comparing key demographic and cognitive variables across ACS, nurses and physicians in the context of their work environment and professional training. We suggest that global health systems can leverage the diverse perspectives of providers to implement a community-based primary care model grounded in interprofessional training. Our study underscores the need for in-service training to instill reflective skills in providers, improve the communication skills of medical providers and strengthen the curative skills of ACS. Greater autonomy needs to be extended to community-based providers to offer care integral to addressing consumer and community needs.Keywords: global health systems, interdisciplinary health teams, community health agents, community-based care
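The ANOVA F-tests mentioned above compare a continuous perception score across the three provider groups. A minimal sketch is given below; the score values are synthetic and the chosen variable (decision-making autonomy) is an illustrative assumption, not the study's measured data.

# Illustrative one-way ANOVA F-test comparing a perception score (e.g.
# decision-making autonomy) across the three provider groups; scores are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
acs        = rng.normal(3.2, 0.8, 168)   # Likert-style scores, 1-5
nurses     = rng.normal(3.5, 0.8, 62)
physicians = rng.normal(4.0, 0.8, 32)

f_stat, p_value = stats.f_oneway(acs, nurses, physicians)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")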
Procedia PDF Downloads 22922 The Effect of Non-Surgical Periodontal Therapy on Metabolic Control in Children
Authors: Areej Al-Khabbaz, Swapna Goerge, Majedah Abdul-Rasoul
Abstract:
Introduction: The most prevalent periodontal disease among children is gingivitis, and it usually becomes more severe in adolescence. A number of intervention studies have suggested that resolution of periodontal inflammation can improve metabolic control in patients diagnosed with diabetes mellitus. Aim: To assess the effect of non-surgical periodontal therapy on the glycemic control of children diagnosed with diabetes mellitus. Method: Twenty-eight children diagnosed with diabetes mellitus, with an established diagnosis of diabetes for at least 1 year, were recruited. Informed consent and child assent forms were obtained from children and parents prior to enrolment. The dental examination of the participants was performed in the same week directly following their annual medical assessment. All patients had a glycosylated hemoglobin (HbA1c%) test one week prior to their annual medical and dental visit and 3 months following non-surgical periodontal therapy. All patients received a comprehensive periodontal examination. The periodontal assessment included clinical attachment loss, bleeding on probing, plaque score, plaque index and gingival index. All patients were referred for non-surgical periodontal therapy, which included oral hygiene instruction and motivation followed by supra-gingival and sub-gingival scaling using ultrasonic and hand instruments. Statistical Analysis: Data were entered and analyzed using the Statistical Package for Social Science software (SPSS, Chicago, USA), version 18. Statistical analysis of the clinical findings was performed to detect differences between the two groups in terms of periodontal findings and HbA1c%. Binary logistic regression analysis was performed in order to examine which factors were significant in multivariate analysis after adjusting for confounding effects. The regression model used the dependent variable ‘improved glycemic control’, and the independent variables entered in the model were plaque index, gingival index, bleeding % and plaque score. Statistical significance was set at p < 0.05. Result: A total of 28 children were included. The mean age of the participants was 13.3±1.92 years. The study participants were divided into two groups: a compliant group (which received dental scaling) and a non-compliant group (which received oral hygiene instructions only). No statistical difference was found between the compliant and non-compliant groups in age, gender distribution, oral hygiene practice or the level of diabetes control. There was a significant difference between the compliant and non-compliant groups in terms of the improvement in HbA1c before and after periodontal therapy. Mean gingival index was the only significant variable associated with improved glycemic control. In conclusion, this study has demonstrated that non-surgical mechanical periodontal therapy can improve HbA1c% control. The results of this study confirmed that children with diabetes mellitus who are compliant with dental care and have routine professional scaling may have better metabolic control compared to diabetic children who are erratic with dental care.Keywords: children, diabetes, metabolic control, periodontal therapy
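The binary logistic regression described above can be sketched as follows. The data frame is a synthetic stand-in for the 28 children and the variable names are illustrative, so the coefficients the sketch produces are not the study's results.

# Sketch of the binary logistic regression: dependent variable "improved
# glycemic control" (1/0), predictors taken from the periodontal assessment.
# The data are synthetic placeholders for the 28 children in the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "improved_hba1c": rng.binomial(1, 0.5, 28),
    "plaque_index":   rng.uniform(0, 3, 28),
    "gingival_index": rng.uniform(0, 3, 28),
    "bleeding_pct":   rng.uniform(0, 100, 28),
    "plaque_score":   rng.uniform(0, 100, 28),
})

model = smf.logit("improved_hba1c ~ plaque_index + gingival_index + "
                  "bleeding_pct + plaque_score", data=df).fit(disp=0)
print(np.exp(model.params))   # odds ratios for each periodontal predictor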
Procedia PDF Downloads 16121 A Postmodern Framework for Quranic Hermeneutics
Authors: Christiane Paulus
Abstract:
Post-Islamism assumes that the Quran should not be viewed in terms of what Lyotard identifies as a ‘meta-narrative'. However, its socio-ethical content can be viewed as critical of power discourse (Foucault). Practicing religion seems to be limited to rites and individual spirituality, taqwa. Alternatively, can we build on Muhammad Abduh's classic-modern reform and develop it through a postmodernist frame? This is the main question of this study. Through his general and vague remarks on the context of the Quran, Abduh was the first to refer to the historical and cultural distance of the text as an obstacle for interpretation. His application, however, corresponded to the modern, absolute idea of authentic sharia. He was followed by Amin al-Khuli, who hermeneutically linked the content of the Quran to the theory of evolution. Fazlur Rahman and Nasr Hamid Abu Zeid remain reluctant to go beyond the general level in terms of context. The hermeneutic circle therefore persists as a challenge: how to get out of it and overcome one's own assumptions. The insight into, and acceptance of, the lasting ambivalence of understanding can be grasped as a postmodern approach; it is documented in Derrida's discovery of the shift in text meanings, difference, and also in Lyotard's theory of the différend. The resulting mixture of meanings (Wolfgang Welsch) can be read together with the classic ambiguity of the premodern interpreters of the Quran (Thomas Bauer). Confronting hermeneutic difficulties in general, Niklas Luhmann shows every description to be an attribution, a tautology, i.e., remaining in the circle. ‘De-tautologization' is possible, namely by analyzing the distinctions, in the sense of objective, temporal and social information, that every text contains. This could be expanded with the Kantian aesthetic dimension of reason (critique of judgment) corresponding to the iʽgaz of the Quran. Luhmann asks, ‘What distinction does the observer/author make?' The Quran, as a speech from God to the first listeners, could be seen as a discourse responding to the problems of everyday life of that time, which can be viewed as the general goal of the entire Quran. Through reconstructing Quranic lifeworlds (Alfred Schütz) in detail, the social structure crystallizes: the socio-economic differences, the enormous poverty. The Quranic instruction to provide for the basic needs of the neglected groups, which often intersect (the old, the poor, slaves, women, children), can be seen immediately in the text. First, the references to lifeworlds/social problems and discourses in longer Quranic passages should be hypothesized. Subsequently, information from the classic commentaries could be extracted; the classical Tafseer, in particular, contains rich narrative material for such reconstruction. By selecting and assigning suitable, specific context information, the meaning of the description becomes condensed (Clifford Geertz). In this manner, the text necessarily acquires an alienation and becomes newly accessible. The socio-ethical implications can thus be grasped from the difference between the original problem and the revealed/improved order/procedure; this small step can be materialized as such, not as an absolute solution but as offering plausible patterns for today's challenges such as the Agenda 2030.Keywords: postmodern hermeneutics, condensed description, sociological approach, small steps of reform
Procedia PDF Downloads 21920 Configuration of Water-Based Features in Islamic Heritage Complexes and Vernacular Architecture: An Analysis into Interactions of Morphology, Form, and Climatic Performance
Authors: Mustaffa Kamal Bashar Mohd Fauzi, Puteri Shireen Jahn Kassim, Nurul Syala Abdul Latip
Abstract:
It is increasingly realized that sustainability includes a response to both the climatic and the cultural context of a place. To assess the cultural context, a morphological analysis of urban patterns from heritage legacies is necessary. While climatic form is derived from an analysis of meteorological data, cultural patterns and forms must be abstracted from typological and morphological study. The current study aims to analyze the morphological and formal elements of water-based architectural and urban design in past Islamic vernacular complexes of hot arid regions, and how a vast utilization of water was shaped and sited to act as a cooling device for an entire complex. Apart from its pleasant coolness, water can be used in an aesthetic way, such as emphasizing visual axes, vividly enhancing the visual quality of the surrounding environment, and symbolically portraying the act of purity in the design. By comparing two case studies based on the analysis of how water features interact with the form, planning and morphology of two Islamic heritage complexes, Fatehpur Sikri (India) and Lahore Fort (Pakistan), with a focus on the Shish Mahal of Lahore Fort, in terms of their mass, architecture and urban planning, it becomes clear that water plays an integral role in their climatic amelioration via different methods of water conveyance. Both sites are known for their substantial historical value and are prominent for their sustainable vernacular buildings; for example, the courtyard of the Shish Mahal in Lahore Fort is designed to provide continuous coolness through various miniature water channels that run underneath the paved courtyard. One of the most remarkable features of this system is that all water was made dregs-free before it was inducted into these underlying channels. In Fatehpur Sikri, the method of conveyance differed from that of Lahore Fort, as the need to supply water to the ridge on which Fatehpur Sikri is situated became the major challenge. Thus, the challenge of supplying water to the palatial complexes was solved by placing inhabitable water buildings within the two supply systems for raising water. The process of raising the water could be either mechanical or laborious, inside the enclosed wells and water-raising houses. The study analyzes and abstracts the water supply forms, patterns and flows in three-dimensional shapes through the actions of evaporative cooling and wind-induced ventilation under arid climates. Through the abstraction of the analytical and descriptive relational morphology of the spatial configurations, the study suggests an idealized spatial system that can be used in urban design and complexes, which then becomes a methodological and abstraction tool of sustainability suited to the contemporary world.Keywords: heritage site, Islamic vernacular architecture, water features, morphology, urban design
Procedia PDF Downloads 37519 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to support smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. Particle size distribution (PSD) and the associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. On another note, low actuation power is beneficial in aerosol-generating devices since it results in reduced emission of toxic chemicals. In the case of e-cigarettes, low heating powers can be considered those below 10 W, compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Because of its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSD and velocities of e-cigarettes under a standard testing condition at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of undiluted aerosols of a recent fourth-generation e-cigarette at low powers, within 6.5 W, using a real-time particle counter (time-of-flight method). Also, the temporal and spatial evolution of the particle size and velocity distribution of aerosol jets is examined using the phase Doppler anemometry (PDA) technique. To the authors’ best knowledge, the application of PDA to e-cigarette aerosol measurement is rarely reported. In the present study, preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase in heating power from 3.5 W to 6.5 W resulted in enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression, combining exponential, Gaussian and polynomial (EGP) distributions, was proposed to successfully describe the asymmetric PSD. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, while the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical centerline streamwise mean velocity decay of the aerosol jet along with a reduction of particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure and a discussion of the results will be provided. Particle size and turbulent characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.Keywords: E-cigarette aerosol, laser doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
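The count median aerodynamic diameter and geometric standard deviation quoted above can be computed from a binned particle-count histogram, assuming the counts roughly follow a log-normal distribution. The sketch below illustrates the calculation; the bin centres and counts are invented for illustration and are not the measured data.

# Sketch: count median diameter (CMD) and geometric standard deviation (GSD)
# from a binned particle-size histogram. Bin data are illustrative only.
import numpy as np

bin_centers_um = np.array([0.3, 0.5, 0.7, 1.0, 1.5, 2.0])   # aerodynamic diameter
counts         = np.array([120, 900, 1500, 700, 150, 30])   # particles per bin

log_d = np.log(bin_centers_um)
cmd = np.exp(np.average(log_d, weights=counts))              # geometric mean ~ median
gsd = np.exp(np.sqrt(np.average((log_d - np.log(cmd)) ** 2, weights=counts)))
print(f"CMD ~ {cmd:.2f} um, GSD ~ {gsd:.2f}")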
Procedia PDF Downloads 4918 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms
Authors: Abdul Rehman, Bo Liu
Abstract:
Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses, which contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments and the optimization. All flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves. Each cut, with multiple control points, was created along the virtual streamlines in the blade channel. For the design of experiments, each sample was arbitrarily generated based on values automatically chosen for the control points defined during parameterization. The optimization was achieved using two algorithms, i.e., a stochastic algorithm and a gradient-based algorithm. For the stochastic algorithm, a genetic algorithm based on an artificial neural network was used as the optimization method in order to approach the global optimum; the evaluation of successive design iterations was performed using the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it requires derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant; the performance was quantified using a multi-objective function. Besides these two classes of optimization methods, there were four optimization cases, i.e., the hub only, the shroud only, and the combination of hub and shroud; for the fourth case, the shroud endwall was optimized using the optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization resulted in an increase in efficiency, while total pressure loss and entropy were reduced. The combination of hub and shroud did not show the overwhelming results that were achieved for the individual hub and shroud cases, which may be caused by the fact that there were too many control variables. The fourth optimization case showed the best result because the optimized hub was used as the initial geometry for optimizing the shroud; the efficiency increased more than in the individual optimization cases, with a mass flow rate equal to that of the baseline turbine design. The results of the artificial neural network and the conjugate gradient method were compared.Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization
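The gradient-based branch of the optimization can be pictured as a conjugate-gradient search over the endwall control-point amplitudes. In the sketch below the objective is an analytic stand-in for the CFD-evaluated isentropic efficiency with a mass-flow penalty, so every function and number is an assumption for illustration rather than the actual NUMECA workflow.

# Sketch of the gradient-based branch: conjugate-gradient search over endwall
# control-point amplitudes. The objective is an analytic stand-in; in the study
# it would be (negative) isentropic efficiency returned by the CFD solver,
# penalised to keep the mass flow rate at its baseline value.
import numpy as np
from scipy.optimize import minimize

n_ctrl = 8                                   # Bezier control-point amplitudes

def objective(x: np.ndarray) -> float:
    efficiency_loss   = -np.sum(np.sin(x) * np.exp(-x ** 2))  # surrogate "loss"
    mass_flow_penalty = 10.0 * np.sum(x) ** 2                  # keep mass flow fixed
    return efficiency_loss + mass_flow_penalty

x0 = np.zeros(n_ctrl)                        # axisymmetric (flat) endwall start
result = minimize(objective, x0, method="CG")
print("optimised control-point amplitudes:", np.round(result.x, 4))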
Procedia PDF Downloads 22517 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World
Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber
Abstract:
Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to become flatter. This is attributed to rapid globalization and the interdependence of humanity, which have engendered a tremendous inflow of human migration towards urban spaces. In order to keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed analysis, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step in understanding urban space lies in a useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show the applicability of the methodology in a developing world setting. Categorization to plan sustainable urban spaces should encompass the buildings and their surroundings. However, the state of the art is mostly dominated by classification of building structures, building types, etc., and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability and are slow to compute in real time. Our proposed method is divided into two steps: categorization and automation. We categorize the urban area in terms of informal and formal spaces and take the surrounding environment into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh was visually annotated and categorized by an expert, and consequently a map was drawn. The categorization is based broadly on two dimensions: the state of urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four categories: 1) highly informal area; 2) moderately informal area; 3) moderately formal area; and 4) highly formal area. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google’s DeepLabV3+ model was used. The model uses the atrous convolution operation to analyze different layers of texture and shape, which enlarges the field of view of the filters to incorporate a larger context. Imagery encompassing 70% of the urban space was used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% mean Intersection over Union (mIoU). In this paper, we propose a pragmatic categorization method that is readily applicable for automatic use in both developing and developed world contexts. The method can be augmented for real-time socio-economic comparative analysis among cities and can be an essential tool for policy makers to plan future sustainable urban spaces.Keywords: semantic segmentation, urban environment, deep learning, urban building, classification
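The mean Intersection over Union (mIoU) figure quoted above is computed per class and then averaged over classes. A minimal sketch follows, with random label maps standing in for the DeepLabV3+ predictions and the ground-truth annotation over the sixteen sub-categories; the shapes and values are assumptions for illustration.

# Sketch: mean Intersection over Union (mIoU) for semantic segmentation,
# computed from predicted and ground-truth label maps over N classes.
# The label arrays are random stand-ins, not real model output.
import numpy as np

def mean_iou(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int) -> float:
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:                       # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

rng  = np.random.default_rng(5)
gt   = rng.integers(0, 16, size=(512, 512))    # 16 sub-categories
pred = rng.integers(0, 16, size=(512, 512))
print("mIoU:", round(mean_iou(gt, pred, 16), 3))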
Procedia PDF Downloads 19116 Saudi Arabia's Struggle for a Post-Rentier Regional Order
Authors: Omair Anas
Abstract:
The Persian Gulf has been in turmoil for a long time, ever since the colonial administration handed over its role to small and weak kings and emirs who were assured of protection in return for many economic and security promises. The regional order Saudi Arabia evolved was a rentier regional order, secured by an expansion of the rentier economy and by taking responsibility for much of the expenses of the regional order on behalf of relatively poor countries. The two oil booms helped the Saudi state to expand the stability driven by this 'rentier order' and to bring countries like Egypt, Jordan, Syria, and Palestine under its tutelage. The disruptive misadventure, however, came with Iran's proclamation of the Islamic Revolution in 1979, which it wanted to export to its 'un-Islamic and American puppet' Arab neighbours. For Saudi Arabia, even the challenge presented by socialist-nationalist Arab dictators like Gamal Abdul Nasser and Hafez Al-Assad had not been as threatening to Saudi Arabia's then-defensive realism. In the Arab uprisings, the Gulf monarchies saw a wave of insecurity, and Iran found it an opportune time to complete the revolutionary process it could not complete after 1979. An alliance of convenience and ideology between Iran and Islamist groups had the real potential to challenge both Saudi Arabia's own security and its leadership in the region. The disruptive threat appeared at a time when the Saudi state had already sensed an impending crisis originating from shifts in the energy markets. Low energy prices, declining global demand, and huge investments in alternative energy resources required Saudi Arabia to rationalize its economy according to the changing global political economy. Domestic Saudi reforms remained gradual until the death of King Abdullah in 2015. What is happening now in the region, the Qatar crisis, the Lebanon crisis and the Saudi-Iranian proxy war in Iraq, Syria, and Yemen, combines three immediate objectives, among them rationalising the Saudi economy and, most importantly, resetting Saudi royal power for Saudi Arabia's longest-serving future king, Mohammad bin Salman. The Saudi King perhaps has no time to wait and watch the power vacuum appearing because of Iran's expansionist foreign policy. The Saudis appear to be employing offensive realism by advancing a pro-active regional policy to counter Iran's threatening influence amid the disappearance of Western security guarantees from the region. As the Syrian civil war comes to a compromised end, ceding much ground to Iran-controlled militias, Hezbollah and Al-Hashad, the Saudi state has lost much ground in these years, and the threat from Iranian proxies is more than a reality, most clearly in Bahrain, Iraq, Syria, and Yemen. This paper attempts to analyse the changing Saudi behaviour in the region, which, as the author understands it, is shaped by an offensive-realist approach towards finding a favourable security environment for the Saudi-led regional order, perhaps a post-rentier order.Keywords: terrorism, Saudi Arabia, Rentier State, gulf crisis
Procedia PDF Downloads 136