Search results for: Oka Aryawan I. Gede Made
617 Characterization of Anisotropic Deformation in Sandstones Using Micro-Computed Tomography Technique
Authors: Seyed Mehdi Seyed Alizadeh, Christoph Arns, Shane Latham
Abstract:
Geomechanical characterization of rocks in detail, and its possible implications for flow properties, is an important aspect of the reservoir characterization workflow. In order to gain more understanding of the microstructure evolution of reservoir rocks under stress, a series of axisymmetric triaxial tests was performed on two different analogue rock samples. In-situ compression tests were coupled with high-resolution micro-computed tomography to elucidate the changes in the pore/grain network of the rocks under pressurized conditions. Two outcrop sandstones were chosen for the current study, representing different cementation states: a well-consolidated and a weakly-consolidated granular system, respectively. High-resolution images were acquired while the rocks deformed in a purpose-built compression cell. A detailed analysis of the 3D images in each series of step-wise compression tests (up to the failure point) was conducted, which includes the registration of the deformed specimen images with the reference pristine dry rock image. A Digital Image Correlation (DIC) technique based on the intensity of the registered 3D subsets, together with particle tracking, was utilized to map the displacement fields in each sample. The results reveal the complex architecture of the localized shear zone in the well-cemented Bentheimer sandstone, whereas for the weakly-consolidated Castlegate sandstone no discernible shear band could be observed even after macroscopic failure. Post-mortem imaging of a sister plug from the friable rock subjected to continuous compression reveals signs of a shear-band pattern. This suggests that, for friable sandstones at small scales, the loading mode may affect the pattern of deformation. Prior to mechanical failure, the continuum digital image correlation approach can reasonably capture the kinematics of deformation. As failure occurs, however, discrete image correlation (i.e., particle tracking) proves superior both in tracking the grains and in quantifying their kinematics (in terms of translations/rotations) with respect to any stage of compaction. An attempt was made to quantify the displacement field in compression using continuum Digital Image Correlation, which is based on the correlation of reference and secondary image intensities. This approach has previously been applied only to unconsolidated granular systems under pressure. We apply the technique to sandstones with various degrees of consolidation. This element of novelty sets the results of this study apart from previous attempts to characterize the deformation pattern in consolidated sands.
Keywords: deformation mechanism, displacement field, shear behavior, triaxial compression, X-ray micro-CT
Procedia PDF Downloads 189
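The continuum DIC step in the abstract above amounts to finding, for each subset of the reference volume, the offset that maximizes an intensity-correlation score in the deformed volume. A minimal sketch of that idea follows (integer-voxel search with zero-normalized cross-correlation; the subset size, search radius, and function names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size 3D subsets."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def track_subset(ref, deformed, center, half=8, search=4):
    """Integer-voxel displacement of the subset centred at `center`,
    found by exhaustive ZNCC search (assumes the subset plus search
    window lies inside both volumes)."""
    z, y, x = center
    tpl = ref[z-half:z+half, y-half:y+half, x-half:x+half]
    best_score, best_disp = -2.0, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = deformed[z+dz-half:z+dz+half,
                                y+dy-half:y+dy+half,
                                x+dx-half:x+dx+half]
                score = zncc(tpl, cand)
                if score > best_score:
                    best_score, best_disp = score, (dz, dy, dx)
    return best_disp, best_score

# Toy check: a volume rigidly shifted by 2 voxels along z is recovered.
rng = np.random.default_rng(0)
ref = rng.random((64, 64, 64))
deformed = np.roll(ref, 2, axis=0)
print(track_subset(ref, deformed, center=(32, 32, 32)))  # ((2, 0, 0), ~1.0)
```

Production DIC codes refine this with sub-voxel interpolation and subset shape functions; the sketch only shows the correlation principle.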
616 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases
Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar
Abstract:
The farming community in India, as in other parts of the world, is highly stressed due to factors such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, which in extreme cases has led to farmer suicides. The lack of an integrated farm advisory system in India adds to farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to a smart assistant, built on analytics and big data, which could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNNs) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers, and dropouts (to avoid overfitting). The models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to take the parameter weights learnt on the ImageNet dataset and apply them to crop diseases, which reduces the number of training epochs. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift, and blurring are used to improve accuracy with images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated on the tomato crop. In India, tomato is affected by 10 different diseases. Our model achieves an accuracy of more than 95% in correctly classifying the diseases. The main contribution of our research is to create a personal assistant for farmers for managing plant disease; although the model was validated on the tomato crop, it can easily be extended to other crops. The advancement of computing technology and the availability of large data have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and huge smartphone penetration, the feasibility of implementing these models is high, resulting in timely advice to farmers and thus increasing farmers' income and reducing input costs.
Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning
Procedia PDF Downloads 119
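The architecture described in the abstract above (convolutions, max pooling, dense layers, dropout, ImageNet transfer learning, and rotation/zoom/shift augmentation) can be assembled in a few lines of Keras. A hedged sketch follows; the MobileNetV2 backbone, input size, and hyperparameters are assumptions for illustration, since the paper does not name its base network:

```python
import tensorflow as tf

NUM_CLASSES = 10  # the abstract cites 10 tomato diseases

# Augmentation mirroring the rotation/zoom/shift transforms in the paper.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.2),
    tf.keras.layers.RandomTranslation(0.1, 0.1),
])

# Transfer learning: freeze ImageNet features, train a small classifier head.
base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    augment,
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),                        # dropout against overfitting
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

For the binary (healthy / not healthy) variant, the last layer becomes a single sigmoid unit with binary cross-entropy loss.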
615 Gender Gap in Returns to Social Entrepreneurship
Authors: Saul Estrin, Ute Stephan, Suncica Vujic
Abstract:
Background and research question: Gender differences in pay are present at all organisational levels, including at the very top. One possible way for women to circumvent organizational norms and discrimination is to engage in entrepreneurship because, as CEOs of their own organizations, entrepreneurs largely determine their own pay. While commercial entrepreneurship plays an important role in job creation and economic growth, social entrepreneurship has come to prominence because of its promise of addressing societal challenges such as poverty, social exclusion, or environmental degradation through market-based rather than state-sponsored activities. This raises the research question of whether social entrepreneurship might be a form of entrepreneurship in which the pay of men and women is the same, or at least more similar; that is to say, there is little or no gender pay gap. If the gender gap in pay persists even at the top of social enterprises, what factors might explain these differences? Methodology: The Oaxaca-Blinder Decomposition (OBD) is the standard approach to decomposing the gender pay gap based on the linear regression model. The OBD divides the gender pay gap into the 'explained' part, due to differences in labour market characteristics (education, work experience, tenure, etc.), and the 'unexplained' part, due to differences in the returns to those characteristics. The latter part is often interpreted as 'discrimination'. There are two issues with this approach. (i) In many countries there is a notable convergence in labour market characteristics across genders; hence the OBD method is no longer revealing, since the largest portion of the gap remains 'unexplained'. (ii) Adding covariates to a base model sequentially, either to test a particular coefficient's 'robustness' or to account for the 'effects' of the added covariates on this coefficient, can be problematic due to sequence-sensitivity when the added covariates are correlated. Gelbach's decomposition (GD) addresses the latter by using the omitted variables bias formula, which constructs a conditional decomposition, thus accounting for sequence-sensitivity when added covariates are correlated. We use GD to decompose the gender differences in pay (annual and hourly salary), size of the organisation (revenues), effort (weekly hours of work), and sources of finance (fees and sales, grants and donations, microfinance and loans, and investors' capital) between men and women leading social enterprises. Database: Our empirical work is made possible by our collection of a unique dataset using respondent-driven sampling (RDS) methods to address the problem that there is as yet no information on the underlying population of social entrepreneurs. The countries that we focus on are the United Kingdom, Spain, Romania, and Hungary. Findings and recommendations: We confirm the existence of a gender pay gap between men and women leading social enterprises. This gap can be explained by differences in the accumulation of human capital, psychological and social factors, as well as cross-country differences. The results of this study contribute to a more rounded perspective, highlighting that although social entrepreneurship may be a highly satisfying occupation, it also perpetuates gender pay inequalities.
Keywords: Gelbach's decomposition, gender gap, returns to social entrepreneurship, values and preferences
Procedia PDF Downloads 244
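For reference, the standard two-fold Oaxaca-Blinder decomposition of the mean pay gap can be written as follows (here with the male coefficient vector as the reference group, one common convention; the paper does not state which reference it uses):

```latex
\bar{y}_m - \bar{y}_f
  = \underbrace{(\bar{X}_m - \bar{X}_f)'\hat{\beta}_m}_{\text{explained}}
  + \underbrace{\bar{X}_f'\,(\hat{\beta}_m - \hat{\beta}_f)}_{\text{unexplained}}
```

where the X-bars are group means of the characteristics and the beta-hats are group-specific regression coefficients. Gelbach's decomposition instead applies the omitted-variable-bias formula to attribute the movement of the gender coefficient, between a base and a full specification, to each added covariate, which makes the attribution invariant to the order in which covariates are added.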
614 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity passing through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is under the travel time limitation. This work is pioneering: whereas the existing literature evaluates travel-time reliability via a single optimal path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc has a new travel-time weight, which takes the value 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new directed arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network which has 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
Keywords: quality of service, reliability, transportation network, travel time
Procedia PDF Downloads 221
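The node-splitting transformation and the min-cost max-flow evaluation described above can be sketched with networkx. This shows only the deterministic building block (the recursive decomposition over binary arc/node failures is omitted), and all names are illustrative:

```python
import networkx as nx

def transform(G, s, t, node_time):
    """Split each intermediate node n into n_in -> n_out so that the node's
    per-unit travel time becomes an arc weight; original arcs get weight 0."""
    H = nx.DiGraph()
    for u, v, d in G.edges(data=True):
        uu = u if u in (s, t) else f"{u}_out"
        vv = v if v in (s, t) else f"{v}_in"
        H.add_edge(uu, vv, capacity=d["capacity"], weight=0)
    for n in G.nodes:
        if n not in (s, t):
            H.add_edge(f"{n}_in", f"{n}_out", weight=node_time[n])  # uncapacitated
    return H

# Toy network: s -> a -> t, arc capacities 5, node a costs 2 time units per unit flow.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=5)
G.add_edge("a", "t", capacity=5)
H = transform(G, "s", "t", node_time={"a": 2})
flow = nx.max_flow_min_cost(H, "s", "t")
print(nx.cost_of_flow(H, flow))  # 10: five units, each spending 2 time units at a
```

In the stochastic model, each state vector fixes which arcs and nodes operate; running this evaluation on a state tells whether it is reliable (it meets the demand within the time limit), and the QoS is the summed probability of the reliable subsets.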
613 Analysing the Stability of Electrical Grid for Increased Renewable Energy Penetration by Focussing on Li-Ion Battery Storage Technology
Authors: Hemendra Singh Rathod
Abstract:
Frequency is, among other factors, one of the governing parameters for maintaining electrical grid stability. The quality of an electrical transmission and supply system is mainly described by the stability of the grid frequency. Over the past few decades, energy generation by intermittent sustainable sources like wind and solar has seen a significant increase globally. Consequently, controlling the associated deviations in grid frequency within safe limits has been gaining momentum so that the balance between demand and supply can be maintained. The lithium-ion battery energy storage system (Li-ion BESS) has been a promising technology for tackling the challenges associated with grid instability. BESS is, therefore, an effective response to the ongoing debate over whether it is feasible to have an electrical grid constantly functioning on one hundred percent renewable power in the near future. In recent years, large-scale manufacturing and capital investment in battery production processes have made Li-ion battery systems cost-effective and increasingly efficient. Li-ion systems require very low maintenance and are independent of geographical constraints while being easily scalable. The paper highlights the use of stationary and moving BESS for balancing electrical energy, thereby maintaining grid frequency with a rapid response. Moving BESS technology, as implemented in the selected railway network in Germany, is here considered as an exemplary concept for demonstrating the same functionality in the electrical grid system. Further, certain applications of Li-ion batteries, such as self-consumption of wind and solar parks or their ancillary services, wind and solar energy storage during low demand, black start, island operation, and residential home storage, offer a solution for effectively integrating renewables and supporting Europe's future smart grid. The EMT software tool DIgSILENT PowerFactory has been utilised to model an electrical transmission system with 100% renewable energy penetration. The stability of such a transmission system has been evaluated together with BESS within a defined frequency band. The transmission system operators (TSOs) have the superordinate responsibility for system stability and must also coordinate with the other European transmission system operators. Frequency control is implemented by the TSO by maintaining a balance between electricity generation and consumption. Li-ion battery systems are here seen as flexible, controllable loads and flexible, controllable generation for balancing energy pools. Thus, using a Li-ion battery storage solution, frequency-dependent load shedding (i.e., automatic gradual disconnection of loads from the grid) and frequency-dependent electricity generation (i.e., automatic gradual connection of BESS to the grid) are used as security measures to maintain grid stability in any scenario. The paper emphasizes the use of stationary and moving Li-ion battery storage for meeting the demands of maintaining grid frequency and stability for near-future operations.
Keywords: frequency control, grid stability, li-ion battery storage, smart grid
Procedia PDF Downloads 150
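The frequency-dependent charging and discharging described above is commonly realized as a proportional (droop) response around nominal frequency. A minimal sketch follows, with a positive result meaning discharge into the grid; the deadband and droop values are illustrative, not grid-code settings:

```python
def bess_power_setpoint(f_hz, p_rated_mw, f_nom=50.0, deadband_hz=0.02, droop=0.04):
    """Droop control: discharge (+) when frequency is low, charge (-) when
    high; full rated power is reached at a deviation of droop * f_nom."""
    df = f_hz - f_nom
    if abs(df) <= deadband_hz:
        return 0.0
    # shift out of the deadband, then respond proportionally
    df_eff = df - (deadband_hz if df > 0 else -deadband_hz)
    p = -df_eff / (droop * f_nom) * p_rated_mw
    return max(-p_rated_mw, min(p_rated_mw, p))

print(bess_power_setpoint(49.80, 10))  # under-frequency -> ~+0.9 MW discharge
print(bess_power_setpoint(50.30, 10))  # over-frequency  -> ~-1.4 MW charge
```

Frequency-dependent load shedding is the mirror image of the same rule, disconnecting load in graded steps as frequency falls through preset thresholds.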
612 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the whole industrial chain delivery link and accounts for a large proportion of total distribution costs. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Due to the discrete and heterogeneous nature and spatial distribution of customer demand, which leads to higher delivery failure rates and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the users' experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design/redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm by using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impacts on the results and suggested helpful managerial insights for courier companies.
Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
Procedia PDF Downloads 78
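At its core, adaptive large-neighbourhood search alternates destroy and repair operators whose selection weights adapt to past success, with an acceptance rule (often simulated annealing) on candidate solutions. A generic skeleton follows; it is not the authors' hybrid algorithm, and the operators, rewards, and cooling schedule are illustrative:

```python
import math, random

def alns(initial, destroy_ops, repair_ops, cost, iters=5000, t0=100.0, alpha=0.999):
    """Skeleton ALNS with simulated-annealing acceptance and
    multiplicative operator-weight adaptation."""
    cur = best = initial
    w_d = {op: 1.0 for op in destroy_ops}
    w_r = {op: 1.0 for op in repair_ops}
    temp = t0
    for _ in range(iters):
        d = random.choices(list(w_d), weights=list(w_d.values()))[0]
        r = random.choices(list(w_r), weights=list(w_r.values()))[0]
        cand = r(d(cur))                 # destroy part of the solution, rebuild it
        delta = cost(cand) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cur = cand
            reward = 2.0 if cost(cur) < cost(best) else 1.1
            w_d[d] *= reward             # reward operators that helped
            w_r[r] *= reward
            if cost(cur) < cost(best):
                best = cur
        temp *= alpha                    # cool the acceptance criterion
    return best

# Toy usage: minimise |x - 42| with a random perturbation as "destroy".
destroy = [lambda x: x + random.uniform(-10, 10)]
repair = [lambda x: x]
print(alns(100.0, destroy, repair, cost=lambda x: abs(x - 42)))  # ~42
```

In the LRP-PFHD setting, a solution would encode which facilities are open plus the vehicle routes, with destroy operators removing customers or closing facilities and repair operators reinserting them cheaply.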
611 The Impact of Anxiety on the Access to Phonological Representations in Beginning Readers and Writers
Authors: Regis Pochon, Nicolas Stefaniak, Veronique Baltazart, Pamela Gobin
Abstract:
Anxiety is known to have an impact on working memory. In reasoning or memory tasks, individuals with anxiety tend to show longer response times and poorer performance. Furthermore, there is a memory bias for negative information in anxiety. Given the crucial role of working memory in lexical learning, anxious students may encounter more difficulties in learning to read and spell. Anxiety could even affect an earlier learning process, that is, the activation of phonological representations, which are decisive for learning to read and write. The aim of this study is to compare the access to phonological representations of beginning readers and writers according to their level of anxiety, using an auditory lexical decision task. Eighty students aged 6 to 9 years completed the French version of the Revised Children's Manifest Anxiety Scale and were then divided into four anxiety groups according to their total score (Low, Median-Low, Median-High, and High). Two sets of eighty-one stimuli (words and non-words) were presented auditorily to these students by means of a laptop computer. Stimulus words were selected according to their emotional valence (positive, negative, neutral). Students had to decide as quickly and accurately as possible whether the presented stimulus was a real word or not (lexical decision). Response times and accuracy were recorded automatically on each trial. We anticipated a) longer response times for the Median-High and High anxiety groups in comparison with the two other groups, b) faster response times for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups, c) lower response accuracy for the Median-High and High anxiety groups in comparison with the two other groups, and d) better response accuracy for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups. Concerning response times, our results showed no difference between the four groups. Furthermore, within each group, average response times were very similar regardless of emotional valence. However, group differences appeared when considering the error rates. The Median-High and High anxiety groups made significantly more errors in lexical decision than the Median-Low and Low groups. Better response accuracy, however, was not found for negative-valence words in comparison with positive- and neutral-valence words in the Median-High and High anxiety groups. Thus, these results showed lower response accuracy for above-median anxiety groups than for below-median groups, but without specificity for negative-valence words. This study suggests that anxiety can negatively impact lexical processing in young students. Although lexical processing speed seems preserved, the accuracy of this processing may be altered in students with a moderate or high level of anxiety. This finding has important implications for the prevention of reading and spelling difficulties. Indeed, during these learnings, if anxiety affects the access to phonological representations, anxious students could be disturbed when they have to match phonological representations with new orthographic representations, because of less efficient lexical representations. This study should be continued in order to specify the impact of anxiety on basic school learning.
Keywords: anxiety, emotional valence, childhood, lexical access
Procedia PDF Downloads 288
610 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification
Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti
Abstract:
Rivers have traditionally been classified, assessed, and managed in terms of hydrological, chemical, and/or biological criteria. In the past, geomorphological classifications played a secondary role, although proposals like the River Styles Framework, Catchment Baseline Survey, or Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system. Understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve its ecological function (hydrologic, sedimentological, and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Besides, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is peculiar to the river studied. The variables used in the classification are specific stream power and mean grain size. A discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness. Each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins. In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool underlying the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance (whether natural or anthropogenic). Hydrographic mapping is also dynamic because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure allows the geomorphological quality of the rivers to be monitored and any alterations to be detected. The maps are useful to researchers and managers, especially for conservation work and river restoration.
Keywords: fluvial auto-classification concept, mapping, geomorphology, river
Procedia PDF Downloads 367
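For context, the specific (unit) stream power used as a classification variable above is conventionally defined per unit bed area as

```latex
\omega = \frac{\rho\, g\, Q\, S}{w}
```

where \rho is the density of water, g the gravitational acceleration, Q the discharge, S the energy slope, and w the channel width. This is the standard definition; the paper does not spell out the exact formulation it uses.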
609 Global Digital Peer-to-Peer (P2P) Lending Platform Empowering Rural India: Determinants of Funding
Authors: Ankur Mehra, M. V. Shivaani
Abstract:
With increasing digitization, the world is coming closer, not only in terms of informational flows but also in terms of capital flows. Micro-finance institutions (MFIs) have leveraged this digital world by resorting to innovative digital social peer-to-peer (P2P) lending platforms, such as Kiva. These digital P2P platforms bring together micro-borrowers and lenders from across the world. The main objective of this study is to understand the funding preferences of social investors, primarily from developed countries (such as the US, UK, and Australia), lending money to borrowers from rural India at zero interest rates through Kiva. A further objective of this study is to increase awareness of such platforms among the various MFIs engaged in providing micro-loans to those in need. The sample comprises India-based micro-loan applications posted by various MFIs on the Kiva lending platform over the period September 2012 to March 2016. Out of 7,359 loans, 256 loans failed to get funded by social investors. On average, a micro-loan with 30 days to expiry gets fully funded in 7,593 minutes, or 5.27 days. 62% of the loans raised on Kiva are related to livelihood, 32.5% of the loans fund basic necessities, and the remaining 5.5% fund education. 47% of the loan applications have more than one borrower, while currency exchange risk falls on the social lenders for 45% of the loans. Controlling for the loan amount and loan tenure, the analyses suggest that loan applications where the number of borrowers is more than one have a lower chance of getting funded than loan applications made by a sole borrower. Such group applications also take more time to get funded. Further, a loan application by a solo woman not only has a higher chance of getting funded but also gets funded faster. The results also suggest that loan applications supported by an MFI with a religious affiliation not only have a lower chance of getting funded but also take longer to get funded than loan applications posted by secular MFIs. The results do not support cross-border currency risk as a factor explaining the determinants of loan funding. Finally, the analyses suggest that loans raised for the purposes of livelihood and education have a higher chance of getting funded, and such loans get funded faster, than loans applied for purposes related to basic necessities such as clothing, housing, food, health, and personal use. The results are robust to controls for an 'MFI dummy' and a 'year dummy'. The key implication of this study is that global social investors tend to develop an emotional connection with single women borrowers, and consequently such loans get funded faster. Hence, MFIs should look for alternative ways of funding loans whose purpose is to meet basic needs, while more loans related to livelihood and education should be raised via digital platforms.
Keywords: P2P lending, social investing, fintech, financial inclusion
Procedia PDF Downloads 143
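Determinant analyses of this kind are often run as a logistic regression of the funding outcome on loan and MFI characteristics. A hedged sketch on synthetic data follows; the paper does not publish its exact specification, and every variable name here is hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
loans = pd.DataFrame({                       # synthetic stand-in data, not the study's
    "funded": rng.integers(0, 2, n),         # 1 if fully funded
    "group": rng.integers(0, 2, n),          # 1 if more than one borrower
    "solo_woman": rng.integers(0, 2, n),     # 1 if a single female borrower
    "religious_mfi": rng.integers(0, 2, n),  # 1 if posting MFI has religious affiliation
    "log_amount": rng.normal(6.0, 1.0, n),   # control: loan size
    "tenure": rng.integers(6, 36, n),        # control: loan tenure in months
})

fit = smf.logit(
    "funded ~ group + solo_woman + religious_mfi + log_amount + tenure",
    data=loans,
).fit(disp=0)
print(fit.summary())
```

The time-to-funding results (which loans fund faster) would analogously use a duration model, such as a Cox proportional-hazards regression on minutes to full funding.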
608 Verification of Geophysical Investigation during Subsea Tunnelling in Qatar
Authors: Gary Peach, Furqan Hameed
Abstract:
The Musaimeer outfall tunnel is one of the longest storm-water tunnels in the world, with a total length of 10.15 km. The tunnel will accommodate surface and rain water received from the drainage networks of 270 km of urban areas in southern Doha, with a pumping capacity of 19.7 m³/s. The tunnel is excavated by a Tunnel Boring Machine (TBM) through the Rus Formation, Midra Shales, and Simsima Limestone. Water inflows at high pressure, complex mixed ground, and weaker ground strata prone to karstification, with vertical and lateral fractures connected to the sea bed, were also encountered during mining. In addition to pre-tender geotechnical investigations, the Contractor carried out a supplementary offshore geophysical investigation in order to fine-tune the existing results of the geophysical and geotechnical investigations. Electrical resistivity tomography (ERT) and seismic reflection surveys were carried out. The offshore geophysical survey was performed, and interpretations of rock mass conditions were made, to provide an overall picture of underground conditions along the tunnel alignment. This allowed the critical tunnelling areas and cutter-head interventions to be planned accordingly. Karstification was monitored with a non-intrusive radar system facility installed on the TBM. The Bore-tunnelling Electrical Ahead Monitoring (BEAM) system was installed at the cutter head and was able to predict the rock mass up to 3 tunnel diameters ahead of the cutter head. The BEAM system was provided with an online facility for real-time monitoring of rock mass conditions, which were then correlated with the rock mass conditions predicted during the interpretation phase of the offshore geophysical surveys. Further correlation was carried out using samples of the rock mass taken from tunnel-face inspections and excavated material produced by the TBM. The BEAM data were continuously monitored to check variations in the resistivity and percentage frequency effect (PFE) of the ground. This system provided information about rock mass conditions, potential karst risk, and the potential for water inflow. The BEAM system was found to be more than 50% accurate in picking up the difficult ground conditions and faults predicted in the geotechnical interpretative report before the start of tunnelling operations. Upon completion of the project, it was concluded that the combined use of different geophysical investigation results allows the execution stage to be carried out with greater confidence and less geotechnical risk. The approaches used for the prediction of rock mass conditions in the Geotechnical Interpretative Report (GIR) and in the seismic reflection and electrical resistivity tomography (ERT) surveys were concluded to be reliable, as the same rock mass conditions were encountered during tunnelling operations.
Keywords: tunnel boring machine (TBM), subsea, karstification, seismic reflection survey
Procedia PDF Downloads 244
607 COVID-19 Laws and Policy: The Use of Policy Surveillance For Better Legal Preparedness
Authors: Francesca Nardi, Kashish Aneja, Katherine Ginsbach
Abstract:
The COVID-19 pandemic has demonstrated both the need for evidence-based and rights-based public health policy and how challenging it can be to make effective decisions with limited information, evidence, and data. The O'Neill Institute, in conjunction with several partners, has been working since the beginning of the pandemic to collect, analyze, and distribute critical data on public health policies enacted in response to COVID-19 around the world in the COVID-19 Law Lab. Well-designed laws and policies can help build strong health systems, implement necessary measures to combat viral transmission, enforce actions that promote public health and safety for everyone, and, at the individual level, have a direct impact on health outcomes. Poorly designed laws and policies, on the other hand, can fail to achieve the intended results and/or obstruct the realization of fundamental human rights, further disease spread, or cause unintended collateral harms. When done properly, laws can provide a foundation that brings clarity to complexity, embraces nuance, and identifies gaps of uncertainty. However, laws can also shape the societal factors that make disease possible. Law is inseparable from the rest of society, and COVID-19 has exposed just how much laws and policies intersect with all facets of society. In the COVID-19 context, evidence-based and well-informed law and policy decisions, made at the right time and in the right place, can and have meant the difference between life and death for many. Having a solid evidentiary base of legal information can promote understanding of what works well and where, and it can drive resources and action to where they are needed most. We know that legal mechanisms can enable nations to reduce inequities and prepare for emerging threats, like novel pathogens that result in deadly disease outbreaks or antibiotic resistance. The collection and analysis of data on these legal mechanisms is a critical step towards ensuring that legal interventions and legal landscapes are effectively incorporated into more traditional kinds of health science data analyses. The COVID-19 Law Lab sees a unique opportunity to collect and analyze this kind of non-traditional data to inform policy, using laws and policies from across the globe and across diseases. This global view is critical to assessing the efficacy of policies in a wide range of cultural, economic, and demographic circumstances. The COVID-19 Law Lab is not just a collection of legal texts relating to COVID-19; it is a dataset of concise and actionable legal information that can be used by health researchers, social scientists, academics, human rights advocates, law and policymakers, government decision-makers, and others for cross-disciplinary quantitative and qualitative analysis to identify best practices from this outbreak, and previous ones, to be better prepared for potential future public health events.
Keywords: public health law, surveillance, policy, legal, data
Procedia PDF Downloads 141
606 Application of 2D Electrical Resistivity Tomographic Imaging Technique to Study Climate Induced Landslide and Slope Stability through the Analysis of Factor of Safety: A Case Study in Ooty Area, Tamil Nadu, India
Authors: S. Maniruzzaman, N. Ramanujam, Qazi Akhter Rasool, Swapan Kumar Biswas, P. Prasad, Chandrakanta Ojha
Abstract:
Landslides are one of the major natural disasters in South Asian countries. By applying 2D electrical resistivity tomographic imaging, the geometry, thickness, and depth of the failure zone of a landslide can be estimated. Landslides are a pertinent problem in the Nilgiris plateau, second only to the Himalaya. The Nilgiris range consists of hard Archean metamorphic rocks. Intense weathering that prevailed during Precambrian time deformed the rocks up to a depth of 45 m. Landslides are dominant in the southern and eastern parts of the plateau, where the drainage basins are comparatively smaller than the northern ones: the low drainage density and coarse texture permit more infiltration of rainwater, whereas the northern part of the plateau, with its high drainage density and fine texture, has less infiltration than runoff and is less susceptible to landslides. To obtain comprehensive information about the landslide zone, a 2D electrical resistivity tomographic imaging study with a CRM 500 resistivity meter was carried out in the Coonoor–Mettupalayam sector of the Nilgiris plateau. To calculate the factor of safety, the infinite slope model of Brunsden and Prior is used. The factor of safety (FS) can be expressed as the ratio of resisting forces to disturbing forces. If FS < 1, disturbing forces are larger than resisting forces and failure may occur. The geotechnical parameters of the soil samples are calculated on the basis of the apparent resistivity values of the litho-units measured from the 2D ERT image of the landslide zone. The relationship between friction angle and various soil properties is established by simple regression analysis of the apparent resistivity data. An increase in water content in the slide zone reduces the effectiveness of the shearing resistance and increases the sliding movement. Time-lapse resistivity changes leading to slope failure are determined through a geophysical factor of safety, which depends on resistivity and site topography. The ERT technique infers soil properties at variable depths over wider areas, and this approach overcomes the limitation of the point information provided by rain gauges and porous probes. Monitoring slope stability through the ERT technique is non-invasive and low-cost and does not alter the soil structure. In landslide-prone areas, an automated electrical resistivity tomographic imaging system with permanent electrode networks should be installed to monitor the hydraulic precursors of landslide movement.
Keywords: 2D ERT, landslide, safety factor, slope stability
Procedia PDF Downloads 317
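The abstract above cites the infinite slope model of Brunsden and Prior; a common statement of the infinite-slope factor of safety for a planar slide at vertical depth z beneath a slope of angle \beta is

```latex
FS = \frac{c' + \left(\gamma z \cos^{2}\beta - u\right)\tan\varphi'}
          {\gamma z \sin\beta \cos\beta}
```

where c' is the effective cohesion, \gamma the unit weight of the soil, u the pore-water pressure on the slip surface, and \varphi' the effective friction angle. The exact form used in the paper may differ, but the role of water is visible here: a rise in pore pressure u directly lowers FS towards the failure threshold of 1, consistent with the abstract's argument about water content in the slide zone.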
605 Current Deflecting Wall: A Promising Structure for Minimising Siltation in Semi-Enclosed Docks
Authors: A. A. Purohit, A. Basu, K. A. Chavan, M. D. Kudale
Abstract:
Many estuarine harbours in the world are facing the problem of siltation in docks, channel entrances, etc. The harbours in India are no exception and require maintenance dredging to achieve navigable depths to keep them operable. Hence, dredging is inevitable and is a costly affair. The heavy siltation in docks in well-mixed, tide-dominated estuaries is mainly due to the settlement of cohesive sediments in suspension. As such, there is a need for a permanent solution that minimises siltation in such docks by altering the hydrodynamic flow field responsible for it through structures constructed outside the dock. One such dock on the west coast of India, wherein siltation of about 2.5-3 m/annum prevails, was considered to understand the hydrodynamic flow field responsible for siltation. The dock is situated in a region where a macro, semi-diurnal tide (range of about 5 m) prevails. In order to change the flow field responsible for siltation inside the dock, the suitability of a Current Deflecting Wall (CDW) outside the dock was studied; the CDW is intended to minimise the sediment exchange rate and the siltation in the dock. A well-calibrated physical tidal model was used to understand the flow field during various phases of the tide for the existing dock in Mumbai harbour. At the harbour entrance, where the tidal flux exchanges in and out of the dock, measurements of water level and current were made to estimate the sediment transport capacity. A distorted scale model (1:400 (H) and 1:80 (V)) of the Mumbai area was used to study the tidal flow phenomenon, wherein tides are generated by an automatic tide generator. Hydraulic model studies carried out under the existing condition (without a CDW) reveal that, during the initial hours of the flood tide, the flow hugs the dock's breakwater, and the part of the flow that enters the dock forms a number of eddies of varying sizes inside the basin, while the remaining flow bypasses the entrance of the dock. During ebb, the flow direction reverses, and part of the flow re-enters the dock from outside and creates eddies at its entrance. These eddies do not allow the water/sediment mass to come out, resulting in the settlement of sediments in the dock both because of the eddies and because of the greater retention of sediment. In the later hours, the current strength outside the dock entrance reduces and allows the water mass of the dock to come out. In order to improve the flow field inside the dockyard, two CDWs of lengths 300 m and 40 m were proposed outside the dock breakwater, in line with the pier wall at the dock entrance. Model studies reveal that, during flood, the major flow gets deflected away from the entrance and no eddies are formed inside the dock, while during ebb the flow does not re-enter the dock, and the sediment flux immediately starts emptying it during the initial hours of ebb. This reduces not only the entry of sediment into the dock, by about 40%, but also the deposition, by about 42%, due to less retention. Thus, the CDW is a promising solution for significantly reducing siltation in docks.
Keywords: current deflecting wall, eddies, hydraulic model, macro tide, siltation
Procedia PDF Downloads 298
604 Using the Clinical Decision Support Platform, Dem Dx, to Assess the ‘Urgent Community Care Team’s Notes Regarding Clinical Assessment, Management, and Healthcare Outcomes
Abstract:
Background: The Heywood, Middleton & Rochdale Urgent Community Care Team (UCCT) is a great example of using a multidisciplinary team to cope with demand. The service reduces unnecessary admissions to hospitals and ensures that patients can leave the hospital more quickly by making care more readily available within the community and patients' homes. The team comprises nurses, community practitioners, and allied health professionals, including physiotherapy, occupational therapy, pharmacy, and GPs. The main challenge for a team with a range of experiences and skill sets is to maintain consistency of care, which technology can help address. Allied healthcare professionals (HCPs) are often used in expanded roles, with duties mainly involving patient consultations and decision-making, to ease pressure on doctors. The Clinical Reasoning Platform (CRP) Dem Dx is used to support new as well as experienced professionals in the decision-making process. By guiding HCPs through diagnosis using an expansive directory of differential diagnoses, the platform helps patients receive quality care in the community. Actions on the platform are determined using NICE guidelines along with local guidance that influences the assessment and management of a patient. Objective: To compare the clinical assessment, decisions, and actions taken by the UCCT multidisciplinary team in the community with Dem Dx, using retrospective clinical cases. Methodology: Dem Dx was used to analyse 192 anonymised cases provided by the HMR UCCT. The team's performance was compared with Dem Dx regarding the quality of the documentation of the clinical assessment and the next steps in the patient's journey, including the initial management, actions, and any onward referrals made. The cases were audited by two medical doctors. Results: The study found that the actions outlined by the Dem Dx platform were appropriate in almost 87% of cases. In a direct comparison between Dem Dx and the actions taken by the clinical team, the platform was found to be suitable 83% of the time (p<0.001) and could lead to a potential improvement of 66% in the assessment and management of cases. Dem Dx also served to highlight the importance of comprehensive, high-quality clinical documentation. The quality of documentation of cases by the UCCT can be improved to provide a detailed account of the assessment and management process. By providing step-by-step guidance and documentation at every stage, Dem Dx may ensure that legal accountability has been fulfilled. Conclusion: With the ever-expanding workforce in the NHS, technology has become a key component in driving healthcare outcomes. To improve healthcare provision and clinical reasoning, a decision support platform can be integrated into HCPs' clinical practice. This retrospective study highlighted the potential assistance with clinical assessments, with the most appropriate next steps and actions in a patient's care, and with improvements in documentation. A further study has been planned to ascertain the effectiveness of the clinical reasoning platform in improving outcomes when used by clinicians in the clinical setting.
Keywords: allied health professional, assessment, clinical reasoning, clinical records, clinical decision-making, documentation
Procedia PDF Downloads 164
603 Compositional Influence in the Photovoltaic Properties of Dual Ion Beam Sputtered Cu₂ZnSn(S,Se)₄ Thin Films
Authors: Brajendra S. Sengar, Vivek Garg, Gaurav Siddharth, Nisheka Anadkat, Amitesh Kumar, Shaibal Mukherjee
Abstract:
The optimal band gap (~1 to 1.5 eV) and high absorption coefficient (~10⁴ cm⁻¹) have made Cu₂ZnSn(S,Se)₄ (CZTSSe) films one of the most promising absorber materials in thin-film photovoltaics. Additionally, CZTSSe consists of elements that are abundant and non-toxic, which makes it even more favourable. The CZTSSe thin films are grown at substrate temperatures (Tsub) of 100 to 500 °C on soda-lime glass (SLG) substrates in an Elettrorava dual ion beam sputtering (DIBS) system, utilizing a target at a working pressure of 2.43×10⁻⁴ mbar with an RF power of 45 W in an argon ambient. The chemical composition, depth profile, structural properties, and optical properties of these CZTSSe thin films prepared on SLG were examined by energy-dispersive X-ray spectroscopy (EDX, Oxford Instruments), a Hiden secondary ion mass spectroscopy (SIMS) workstation with an oxygen ion gun of energy up to 5 keV, X-ray diffraction (XRD) (Rigaku, Cu Kα radiation, λ = 0.154 nm), and spectroscopic ellipsometry (SE, M-2000D from J. A. Woollam Co., Inc). It is observed that the thin films deposited at Tsub = 200 and 300 °C show Cu-poor and Zn-rich states (i.e., Cu/(Zn + Sn) < 1 and Zn/Sn > 1), which is not the case for films grown at other Tsub. It has been reported that CZTSSe thin films with the highest efficiency are typically in Cu-poor and Zn-rich states. The values of the band gap in the fundamental absorption region of CZTSSe are found to be in the range of 1.23-1.70 eV, depending upon the Cu/(Zn+Sn) ratio. A decline in the optical band gap is also observed with an increase in the Cu/(Zn+Sn) ratio (evaluated from the EDX measurements). Cu-poor films are found to have a higher optical band gap than Cu-rich films. The decrease in the band gap with the increase in Cu content in the CZTSSe films may be attributed to changes in the extent of p-d hybridization between Cu d-levels and (S, Se) p-levels. CZTSSe thin films with Cu/(Zn+Sn) ratios in the range 0.86-1.5 have been successfully deposited using DIBS. The optical band gap of the films is found to vary from 1.23 to 1.70 eV based on the Cu/(Zn+Sn) ratio. CZTSSe films with a Cu/(Zn+Sn) ratio of 0.86 are found to have an optical band gap close to the ideal band gap (1.49 eV) for the highest theoretical conversion efficiency. Thus, by tailoring the value of Cu/(Zn+Sn), CZTSSe thin films with the desired band gap can be obtained. Acknowledgment: We are thankful for the DIBS, EDX, and XRD facilities equipped at the Sophisticated Instrument Centre (SIC) at IIT Indore. The authors B. S. S. and A. K. acknowledge CSIR, and V. G. acknowledges UGC, India, for their fellowships. B. S. S. is thankful to DST and IUSSTF for a BASE Internship Award. Prof. Shaibal Mukherjee is thankful to DST and IUSSTF for a BASE Fellowship and the MEITY YFRF award. This work is partially supported by DAE BRNS, DST CERI, and a DST-RFBR Project under the India-Russia Programme of Cooperation in Science and Technology. We are thankful to Mukul Gupta for the SIMS facility equipped at UGC-DAE Indore.
Keywords: CZTSSe, DIBS, EDX, solar cell
Procedia PDF Downloads 250
602 Effect of Low to Moderate Altitude on Football Performance: An Analysis of Thirteen Seasons in the South African Premier Soccer League
Authors: Khatija Bahdur, Duane Dell’Oca
Abstract:
There is limited information on how altitude impacts performance in team sports. Most altitude research in football has been conducted at high elevation (>2500 m), leaving a gap in our understanding of whether low to moderate altitude affects performance. South African Premier Soccer League (PSL) fixtures entail matches played at altitudes from sea level to 1700 m above mean sea level. Although coaches highlight the effect of altitude on performance outcomes in matches, further research is needed to establish whether altitude does impact match results. Greater insight into if and how altitude impacts performance in the PSL will assist coaches in deciding if and how to incorporate altitude in their planning. The purpose of this study is to fill this gap through a retrospective analysis of PSL matches. This quantitative study is based on a descriptive analysis of 181 PSL matches involving one team based at sea level, taking place over a period of thirteen seasons. The following data were obtained: the altitude at which each match was played, the match result, the timing of goals, and the timing of substitutions. Altitude was classified in two ways: inland (>500 m) versus coastal (<500 m), and further subdivided into narrower categories (<500 m, 500-1000 m, 1000-1300 m, 1300-1500 m, >1500 m). The analysis included a two-sample t-test to determine differences in total goals scored and timing of goals between inland and coastal matches, and the chi-square test to identify the significance of altitude for match results. The level of significance was set at the alpha level of 0.05. Match results were significantly affected by altitude and level of altitude, with inland teams most likely to win when playing at inland venues (p=0.000). The proportion of draws was slightly higher at the coast. At altitudes of 500-1000 m, 1300-1500 m, and 1500-1700 m, a greater percentage of matches were won by coastal teams as opposed to drawn. The timing of goals varied based on a team's base altitude and the match elevation. The most significant differences were between 36-40 minutes (p=0.023), 41-45 minutes (p=0.000), and 50-65 minutes (p=0.000). Breaking down inland teams' matches into the different altitude categories highlighted greater differences: inland teams scored more goals per minute between 10-20 minutes (p=0.009), 41-45 minutes (p=0.003), and 50-65 minutes (p=0.015). The total number of goals scored per match also varied significantly with altitude, for a) inland teams (p=0.000) and b) coastal teams (p=0.006). Coastal teams made significantly more substitutions when playing at altitude (p=0.034), although there were no significant differences when comparing the different altitude categories. The timing of all three substitutions, however, did vary significantly at the different altitudes. There were no significant differences in the timing or number of substitutions for inland teams. Match results and the timing of goals are influenced by altitude, with differences between levels of altitude also playing a role. The trends indicate that inland teams win more matches when playing at altitude against coastal teams, and they score more goals just prior to half-time and in the first quarter of the second half.
Keywords: coastal teams, inland teams, timing of goals, results, substitutions
Procedia PDF Downloads 131
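The two tests named in the abstract above are straightforward to reproduce; a sketch with scipy on illustrative (not the study's) data:

```python
import numpy as np
from scipy import stats

# Goals per match at inland vs. coastal venues (toy numbers)
goals_inland = np.array([2, 3, 1, 4, 2, 3, 2, 1, 3, 2])
goals_coastal = np.array([1, 2, 1, 2, 3, 1, 2, 2, 1, 2])

# Two-sample t-test on total goals, alpha = 0.05 as in the study
t_stat, p_val = stats.ttest_ind(goals_inland, goals_coastal)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# Chi-square test of independence: match result by venue type.
# Rows: inland venue, coastal venue; columns: inland-team win, draw, loss.
table = np.array([[60, 20, 20],
                  [30, 25, 26]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```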
601 Bioresorbable Medicament-Eluting Grommet Tube for Otitis Media with Effusion
Authors: Chee Wee Gan, Anthony Herr Cheun Ng, Yee Shan Wong, Subbu Venkatraman, Lynne Hsueh Yee Lim
Abstract:
Otitis media with effusion (OME) is the leading cause of hearing loss in children worldwide. Surgery to insert a grommet tube into the eardrum is usually indicated for OME unresponsive to antimicrobial therapy; it is the most common surgery for children. However, currently available commercial grommet tubes are non-bioresorbable and not drug-treated, with an unpredictable duration of retention on the eardrum to ventilate the middle ear. Their functionality is impaired when clogged or chronically infected, requiring additional surgery to remove or reinsert the grommet tubes. We envisaged that a novel, fully bioresorbable grommet tube with sustained antibiotic-release technology could address these drawbacks. In this study, drug-loaded bioresorbable poly(L-lactide-co-ε-caprolactone) (PLC) copolymer grommet tubes were fabricated by a microinjection moulding technique. The in vitro drug release and a degradation model of the PLC tubes were studied. Antibacterial properties were evaluated by incubating PLC tubes in P. aeruginosa broth. Surface morphology was analyzed using scanning electron microscopy. A preliminary animal study was conducted using guinea pigs as an in vivo model to evaluate PLC tubes with and without drug, with the commercial Mini Shah grommet tube as a comparison. Our in vitro data showed sustained drug release over 3 months. All PLC tubes revealed exponential degradation profiles over time. Modeling predicted loss of tube functionality in water at approximately 14 weeks and 17 weeks for PLC with and without drug, respectively. Generally, the PLC tubes had less bacterial adherence, attributed to the much smoother tube surfaces compared to Mini Shah. Antibiotic released from the PLC tube further made bacterial adherence on the surface negligible. The tubes showed neither inflammation nor otorrhea at 18 weeks post-insertion in the eardrums of guinea pigs, but demonstrated a severe degree of bioresorption. Histology confirmed that the new PLC tubes were biocompatible. Analyses of the PLC tubes in the eardrums showed bioresorption profiles close to our in vitro degradation models. The bioresorbable antibiotic-loaded grommet tubes showed good predictability of functionality. The smooth surface and sustained-release technology reduced the risk of tube infection. A functional tube duration of 18 weeks allows a sufficient ventilation period to treat OME. Our ongoing studies include modifying the surface properties with protein coating, optimizing the drug dosage in the tubes to enhance their performance, evaluating the functional outcome on hearing after full resorption of the grommet tube and healing of the eardrum, and developing an animal model with OME to further validate our in vitro models.
Keywords: bioresorbable polymer, drug release, grommet tube, guinea pigs, otitis media with effusion
Procedia PDF Downloads 450
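Given the exponential degradation profiles reported above, the time-to-functionality-loss prediction amounts to fitting a decay constant and extrapolating to a functional threshold. A sketch on toy data (the threshold, rate, and retention values are illustrative assumptions, not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fraction of some tube property retained vs. weeks in vitro (toy data)
weeks = np.array([0, 2, 4, 8, 12, 16])
retained = np.array([1.00, 0.85, 0.72, 0.52, 0.38, 0.27])

decay = lambda t, k: np.exp(-k * t)           # exponential degradation model
(k,), _ = curve_fit(decay, weeks, retained, p0=[0.05])

threshold = 0.30                              # assumed functional cutoff
t_fail = np.log(1.0 / threshold) / k          # solve exp(-k * t) = threshold
print(f"k = {k:.3f} per week; functionality lost near week {t_fail:.1f}")
```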
600 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley
Abstract:
Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but can give little information about the critical behaviours that could help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using both Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated thermal data, due to the fire's behaviour, into the FEA solver in a series of iterations. In our recent work with Tata Steel U.K using a two-way coupling methodology to determine fire performance, it was shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K's Trisomet product, a polyisocyanurate (PIR)-based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main limitation being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin relative to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modeling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact factors of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids
Procedia PDF Downloads 79
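The abstract above does not specify which ROM formulation is adopted; proper orthogonal decomposition (POD) via the singular value decomposition is one standard reduction technique and illustrates the idea. A sketch on synthetic snapshot data (all sizes and the energy cutoff are assumptions):

```python
import numpy as np

# Synthetic "temperature field" snapshots: 2000 spatial points x 50 time steps,
# built from three spatial modes so the low-rank structure is known.
x = np.linspace(0.0, 1.0, 2000)
t = np.linspace(0.0, 1.0, 50)
snapshots = (np.outer(np.sin(np.pi * x), t)
             + 0.1 * np.outer(np.sin(3 * np.pi * x), t**2)
             + 0.01 * np.outer(np.sin(5 * np.pi * x), t**3))

# POD: SVD of the snapshot matrix; keep modes capturing 99.9% of the energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :r]                      # reduced spatial basis
coeffs = basis.T @ snapshots          # each snapshot compressed to r numbers
recon = basis @ coeffs                # low-rank reconstruction
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(f"kept {r} modes, relative reconstruction error {err:.2e}")
```

Evolving a handful of modal coefficients instead of the full field is one way such a reduction can cut the cost of each CFD-FEA exchange while retaining fine panel detail in the basis.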
Procedia PDF Downloads 79
599 Improving Working Memory in School Children through Chess Training
Authors: Veena Easvaradoss, Ebenezer Joseph, Sumathi Chandrasekaran, Sweta Jain, Aparna Anna Mathai, Senta Christy
Abstract:
Working memory refers to a cognitive processing space where information is received, managed, transformed, and briefly stored. It is an operational process of transforming information for the execution of cognitive tasks in different and new ways. Many classroom activities require children to remember information and mentally manipulate it. While the impact of chess training on intelligence and academic performance has been unequivocally established, its impact on working memory needs to be studied. This study, funded by the Cognitive Science Research Initiative, Department of Science & Technology, Government of India, analyzed the effect of one year of chess training on the working memory of children. A pretest–posttest control group design was used, with 52 children in the experimental group and 50 children in the control group. The sample was selected from children studying in school (grades 3 to 9) and included both genders. The experimental group underwent weekly chess training for one year, while the control group was involved in extracurricular activities. Working memory was measured by two subtests of WISC-IV INDIA. The Digit Span subtest involves recalling a list of numbers of increasing length presented orally in forward and in reverse order, and the Letter–Number Sequencing subtest involves rearranging jumbled letters and numbers presented orally following a given rule. Both tasks require the child to receive and briefly store information, manipulate it, and present it in a changed format. The children were trained using the Winning Moves curriculum and audio-visual learning methods, with hands-on chess training; they recorded their games on score sheets and analyzed their mistakes, thereby increasing their meta-analytical abilities. They were also trained in opening theory, checkmating techniques, end-game theory and tactical principles. Pre-test equivalence of means was established. Analysis revealed that the experimental group had significant gains in working memory compared to the control group. The present study clearly establishes a link between chess training and working memory. The transfer of chess training to the improvement of working memory could be attributed to the fact that while playing chess, children evaluate positions, visualize new positions in their mind, analyze the pros and cons of each move, and choose moves based on the information stored in their mind. If working memory capacity could be expanded or made to function more efficiently, it could result in the improvement of executive functions as well as the scholastic performance of the child.
Keywords: chess training, cognitive development, executive functions, school children, working memory
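For readers unfamiliar with the pretest–posttest control group design, the sketch below shows how pre-equivalence and group gains might be tested; the score distributions are entirely made up, since the study's raw data are not reproduced in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical digit-span-like scores; group sizes match the abstract,
# everything else is simulated for illustration only.
pre_exp  = rng.normal(10.0, 2.0, 52)            # experimental, pretest
post_exp = pre_exp + rng.normal(1.5, 1.0, 52)   # posttest after training
pre_ctl  = rng.normal(10.0, 2.0, 50)            # control, pretest
post_ctl = pre_ctl + rng.normal(0.3, 1.0, 50)   # posttest, no training

# Pre-equivalence of means, then comparison of gain scores.
t_pre, p_pre = stats.ttest_ind(pre_exp, pre_ctl)
gain_exp, gain_ctl = post_exp - pre_exp, post_ctl - pre_ctl
t_gain, p_gain = stats.ttest_ind(gain_exp, gain_ctl)

print(f"pretest equivalence: p = {p_pre:.3f}")
print(f"gain comparison:     t = {t_gain:.2f}, p = {p_gain:.4f}")
```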
Procedia PDF Downloads 263
598 Application of Multidimensional Model of Evaluating Organisational Performance in Moroccan Sport Clubs
Authors: Zineb Jibraili, Said Ouhadi, Jorge Arana
Abstract:
Introduction: Organizational performance is recognized by some theorists as a one-dimensional concept, and by others as multidimensional. This concept, which is already difficult to apply in traditional companies, is even harder to identify, measure and manage in voluntary organizations, essentially because of the complexity of organizations such as sport clubs, which are characterized by multiple goals and multiple constituencies. Indeed, the new culture of professionalization and modernization around organizational performance has created new pressures from the state, sponsors, members and other stakeholders, which have required these sport organizations to become more performance-oriented, or to build their capacity to better manage their organizational performance. The evaluation of performance can be made by evaluating the input (e.g. available resources), throughput (e.g. processing of the input) and output (e.g. goals achieved) of the organization. In non-profit organizations (NPOs), questions of performance have become increasingly important in the world of practice. To our knowledge, most studies have used the same methods to evaluate performance in non-profit sport organizations (NPSOs), and no recent study has proposed a club-specific model. Based on a review of the studies that specifically addressed the organizational performance (and effectiveness) of NPSOs at the operational level, the present paper aims to provide a multidimensional framework for understanding, analysing and measuring the organizational performance of sport clubs. The paper combines the dimensions found in the literature and selects those best suited to the model we develop for the case of Moroccan sport clubs. Method: We propose to apply our unified model of evaluating organizational performance, which takes into account the limitations found in the literature, using a qualitative study on a sample of Moroccan sport clubs (football, basketball, handball and volleyball). The sample comprises data from clubs participating in the first division of the professional league over the period from 2011 to 2016. Each club had to meet specific criteria in order to be included in the sample: 1. Each club must have full financial data published in its annual financial statements, audited by an independent chartered accountant. 2. Each club must have sufficient data regarding its sport and financial performance. 3. Each club must have participated at least once in the first division of the professional league. Result: The study showed that the dimensions that constitute the model exist in the field, with some small modifications. The correlations between the different dimensions are positive. Discussion: The aim of this study is to test, for the Moroccan case, the unified model that emerged from earlier and narrower approaches. Using the input-throughput-output model as a framework for efficiency, it was possible to identify and define five dimensions of organizational effectiveness applied to this field of study.
Keywords: organisational performance, multidimensional model, evaluation of organizational performance, sport clubs
Procedia PDF Downloads 323
597 Effects of Soil Neutron Irradiation in Soil Carbon Neutron Gamma Analysis
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
The carbon sequestration question of modern times requires the development of an in-situ method of measuring soil carbon over large landmasses. Traditional chemical analytical methods used to evaluate large land areas require extensive soil sampling prior to processing for laboratory analysis; collectively, this is labor-intensive and time-consuming. An alternative is nuclear physics analysis, primarily in the form of pulsed fast-thermal neutron-gamma soil carbon analysis. This method is based on measuring the gamma-ray response that appears upon neutron irradiation of soil. A specific gamma line with an energy of 4.438 MeV appearing upon neutron irradiation can be attributed to soil carbon nuclei. Based on the measured gamma line intensity, assessments of soil carbon concentration can be made. This can be done directly in the field using a specially developed pulsed fast-thermal neutron-gamma system (PFTNA system). This system conducts in-situ analysis in a scanning mode coupled with GPS, which provides soil carbon concentration and distribution over large fields. The system has radiation shielding to minimize the dose rate (within radiation safety guidelines) for safe operator usage. Questions concerning the effect of neutron irradiation on soil health are addressed in this study. Information regarding the absorbed neutron and gamma dose received by the soil and its distribution with depth is discussed. This information was generated from Monte-Carlo simulations (MCNP6.2 code) of neutron and gamma propagation in soil. The resulting data were used for the analysis of possible induced irradiation effects. The physical, chemical and biological effects of neutron soil irradiation were considered. From a physical aspect, we considered the induction of new isotopes by neutrons produced by the PFTNA system and estimated the possible increase in the post-irradiation gamma background relative to the natural background. An insignificant increase in gamma background appeared immediately after irradiation but returned to the original value after several minutes due to the decay of short-lived new isotopes. From a chemical aspect, possible radiolysis of water present in the soil was considered. Based on simulations of water radiolysis, we concluded that the dose rate used cannot produce radiolysis products at notable rates. Possible effects of neutron irradiation (by the PFTNA system) on soil biota were also assessed experimentally. No notable changes were observed at the taxonomic level, nor was functional soil diversity affected. Our assessment suggests that the use of a PFTNA system with a neutron flux of 10⁷ n/s for soil carbon analysis does not notably affect soil properties or soil health.
Keywords: carbon sequestration, neutron gamma analysis, radiation effect on soil, Monte-Carlo simulation
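The return of the gamma background to its original value within minutes can be illustrated with a simple decay calculation. In the sketch below, the chosen activation products, their initial activities and the background rate are illustrative assumptions; only the half-lives are physical constants.

```python
import numpy as np

# Hypothetical post-irradiation activities (Bq) of two short-lived
# activation products plausible in soil, with their real half-lives (s).
isotopes = {"Al-28": (50.0, 134.5), "Mn-56": (5.0, 9284.0)}
natural_background = 100.0  # counts/s, assumed

def excess_activity(t):
    """Total activity from activation products t seconds after irradiation."""
    return sum(a0 * np.exp(-np.log(2) * t / t_half)
               for a0, t_half in isotopes.values())

for t in [0, 60, 300, 900]:
    frac = excess_activity(t) / natural_background
    print(f"t = {t:4d} s: excess = {100 * frac:5.2f}% of background")
```

On these assumed numbers, the short-lived Al-28 component dominates the initial excess and is essentially gone after a few half-lives, matching the qualitative behaviour reported above.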
Procedia PDF Downloads 142
596 Good Functional Outcome after Late Surgical Treatment for Traumatic Rotator Cuff Tear, a Retrospective Cohort Study
Authors: Soheila Zhaeentan, Anders Von Heijne, Elisabet Hagert, André Stark, Björn Salomonsson
Abstract:
Recommended treatment for traumatic rotator cuff tear (TRCT) is surgery within a few weeks after injury if the diagnosis is made early, especially if a functional impairment of the shoulder exists. This may lead to the assumption that a poor outcome can be expected from delayed surgical treatment, when the patient is diagnosed at a later stage. The aim of this study was to investigate whether surgical repair later than three months after injury may still result in successful outcomes and patient satisfaction. There is evidence in the literature that good treatment results can be expected up to three months after the injury, but little is known about later treatment with cuff repair. 73 patients (75 shoulders), 58 males/17 females, mean age 59 (range 34-72), who had undergone surgical intervention for TRCT between January 1999 and December 2011 at our clinic, were included in this study. Patients were assessed by MRI investigation, clinical examination, the Western Ontario Rotator Cuff index (WORC), the Oxford Shoulder Score, the Constant-Murley Score, EQ-5D and patient subjective satisfaction at follow-up. The patients treated surgically within three months (< 12 weeks) after injury (39 cases) were compared with patients treated more than three months (≥ 12 weeks) after injury (36 cases). WORC was used as the primary outcome measure and the other variables as secondary. A senior consultant radiologist, blinded to patient category and clinical outcome, evaluated all MRI images. Rotator cuff integrity and the presence of arthritis, fatty degeneration and muscle atrophy were evaluated in all cases. The average follow-up time was 56 months (range 14-149) and the average time from injury to repair was 16 weeks (range 3-104). No statistically significant differences were found for any of the assessed parameters or scores between the two groups. The mean WORC score was 77 in both groups (early group range 25-100, late group range 27-100; p = 0.86); Constant-Murley Score (p = 0.91), Oxford Shoulder Score (p = 0.79), EQ-5D index (p = 0.86). The re-tear frequency was 24% in both groups, and patients with re-tears reported less satisfaction with the outcome. Discussion and conclusion: This study shows that surgical repair of TRCT performed later than three months after injury may result in good functional outcomes and patient satisfaction. However, this does not motivate an intentional delay in surgery when there is an indication for surgical repair, as such a delay may adversely affect the possibility of performing a repair. Our results show that surgeons may safely consider surgical repair even if a delay in diagnosis has occurred. This retrospective cohort study of 75 shoulders shows good functional results after traumatic rotator cuff tear (TRCT) treated surgically up to one year after the injury.
Keywords: traumatic rotator cuff injury, time to surgery, surgical outcome, retrospective cohort study
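A two-group comparison of the kind reported above can be sketched as follows, using simulated WORC scores (the patient data are not public); a rank-based test is shown as one common choice alongside the t-test for bounded outcome scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical WORC scores (0-100); group sizes and ranges follow the
# abstract, the distributions themselves are invented for illustration.
worc_early = np.clip(rng.normal(77, 18, 39), 25, 100)  # repair < 12 weeks
worc_late  = np.clip(rng.normal(77, 18, 36), 27, 100)  # repair >= 12 weeks

u_stat, p_value = stats.mannwhitneyu(worc_early, worc_late,
                                     alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.2f}")
```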
Procedia PDF Downloads 223
595 Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety
Authors: Neeti Nayak, Khalid Duri
Abstract:
Despite the many advances made in transportation planning, the numbers of injuries and fatalities in the United States involving motorized vehicles near intersections remain largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicate that accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections), the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate application of traffic controls. Intersections that were initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments. Many accidents involving pedestrians occur at locations that should have been designed with safe crosswalks. Conventional solutions for evaluating intersection safety often require costly engineering surveys and analysis, which limits the capacity of resource-constrained administrations to adequately satisfy their community's needs for safe roadways, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions: the presence of traffic controls, zoning records, locations of interest for human activity, design speed of roadways, topographic details and immovable structures. The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce the risk of accidents, using 2-dimensional data representing multi-modal street networks, parcels, crosswalks and demographic information alongside 3-dimensional models of buildings, elevation, slope and aspect surfaces to evaluate visibility and lighting conditions and to estimate probabilities for jaywalking and risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities that conform to similar transportation standards, given the availability of relevant GIS data.
Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design
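One building block of such a framework is a geometric line-of-sight test. Below is a minimal 2-D sketch using shapely; the coordinates and the obstructing building are invented for illustration, and a production model would work against the 3-D building, elevation and slope surfaces described above.

```python
from shapely.geometry import LineString, Polygon

# Illustrative visibility check at an intersection: does a building
# corner block the sight line between an approaching driver and the
# entry point of a crosswalk? Coordinates are in metres, made up.
driver = (0.0, -15.0)        # vehicle position on the approach leg
pedestrian = (8.0, 3.0)      # crosswalk entry point
building = Polygon([(3, -1), (12, -1), (12, -10), (3, -10)])

sight_line = LineString([driver, pedestrian])
blocked = sight_line.intersects(building)
print("sight line blocked" if blocked else "clear line of sight")
```

Running such a test over many driver/pedestrian position pairs per intersection yields a visibility score that can feed a composite risk ranking across the street network.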
Procedia PDF Downloads 109
594 Gender Construction in Contemporary Dystopian Fiction in Young Adult Literature: A South African Example
Authors: Johan Anker
Abstract:
The purpose of this paper is to discuss the nature of gender construction in modern dystopian fiction, the development of this genre in young adult literature, and the reasons for its enormous appeal to adolescent readers. A recent award-winning South African text in this genre, The Mark by Edith Bullring (2014), will be used as an example, while also comparing it to international bestsellers like Divergent (Roth, 2011), The Hunger Games (Collins, 2008) and others. Theoretical insights from critics and academics in the field of children's literature, such as Ames, Coats, Bradford, Booker, Basu, Green-Barteet, Hintz, McAlear, McCallum, Moylan, Ostry, Ryan, Stephens and Westerfield, will be referred to, and their insights used as part of the analysis of The Mark. The role of relevant and recurring themes in this genre, like global concerns, environmental destruction, liberty, self-determination, social and political critique, and surveillance and repression by the state or other institutions, will also be addressed. The paper will briefly refer to the history and emergence of dystopian literature as a genre in adult and young adult literature, part of the long tradition since the publication of Orwell's 1984 and Huxley's Brave New World. Different factors appeal to adolescent readers in the modern versions of this hybrid genre for young adults: teenage protagonists who question the underlying values of a flawed society, such as an inhuman or tyrannical government; a growing understanding of the society around them; feelings of isolation; and the dynamics of relationships. This unease leads to a growing sense of the potential to act against society (rebellion) and of their role as agents in a larger community, along with independent decision-making abilities. This awareness in turn leads to a growing sense of self (identity and agency) and the development of romantic relationships. Particular emphasis will be placed on the specifically modern tendency towards a female protagonist who leads the rebellion against the state and its apparatus and who gains agency and independence in doing so, an important part of the identification with and construction of gender within the traditional coming-of-age young adult novel. A comparison will be made between the traditional themes, structures and plots of young adult literature (YAL) and those of adult dystopian literature and recent dystopian YAL, while the hybrid nature of this genre and the 'sense of unease', but also of hope, as an essential part of youth literature, will be discussed with reference to the closure of these novels. Important questions about the didactic nature of these texts, the political issues they raise, and the importance of the formation of agency and identity for the young adult reader, as well as identification with the protagonists of this genre, are also part of this discussion of The Mark and other YAL novels.
Keywords: agency, dystopian literature, gender construction, young adult literature
Procedia PDF Downloads 190
593 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are the three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking step, using a ranker trained on the MS MARCO dataset of 500K queries, to extract the most relevant text passage and thereby shorten the lengthy documents. Finally, a QA model extracts the answers from the shortened documents based on the query and returns the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Any such dataset proves inadequate for questions that have time-varying answers. For illustration, consider the query 'Where will the next Olympics be?'. The gold answer for this query as given in the GNQ dataset is “Tokyo”. Since the dataset was collected in 2016, and the next Olympics after 2016 were the Tokyo 2020 Games, that answer was correct at the time. But if the same question is asked in 2022, the answer is “Paris, 2024”. Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric that uses the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
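The proposed time-aware metric can be sketched compactly: given the timestamp of the query and the top-n predictions, score against the gold answer valid for that period. The period table below is an illustration built around the Olympics example; in the paper, the gold data come from the GNQ-derived test set.

```python
from datetime import datetime

# Illustrative time-indexed gold answers for one time-varying question.
gold_by_period = [
    (datetime(2016, 1, 1), datetime(2021, 8, 8), "tokyo"),
    (datetime(2021, 8, 9), datetime(2024, 8, 11), "paris"),
]

def time_aware_match(predictions, asked_at):
    """Score 1 if any top-n prediction matches the gold answer that was
    valid at the moment the question was asked, else 0."""
    for start, end, gold in gold_by_period:
        if start <= asked_at <= end:
            return int(any(gold in p.lower() for p in predictions))
    return 0  # no gold answer defined for this period

top3 = ["Paris, 2024", "France", "Los Angeles"]
print(time_aware_match(top3, datetime(2022, 6, 1)))  # -> 1
print(time_aware_match(top3, datetime(2018, 6, 1)))  # -> 0 ("tokyo" expected)
```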
Procedia PDF Downloads 101
592 Enhancing Efficiency of Building through Translucent Concrete
Authors: Humaira Athar, Brajeshwar Singh
Abstract:
Generally, the brightness of the indoor environment of buildings is maintained entirely by artificial lighting, which consumes a large amount of resources. It is reported that lighting consumes about 19% of total generated electricity, which accounts for about 30-40% of total building energy consumption. One possible way to reduce lighting energy is to exploit sunlight, either through suitable devices or through energy-efficient materials like translucent concrete. Translucent concrete is an architectural concrete that allows the passage of natural as well as artificial light through it. Several attempts have been made on different aspects of translucent concrete, such as light-guiding materials (glass fibers, plastic fibers, cylinders, etc.), concrete mix design and manufacturing methods for use in building elements. However, concerns have been raised over various related issues, such as poor compatibility between the optical fibers and the cement paste, unaesthetic appearance due to disturbance of the fiber arrangement during vibration, and high shrinkage in flowable concrete due to its high water/cement ratio. There is a need to develop a translucent concrete that meets the structural safety requirements of OPC concrete while maximizing the energy saved on illumination and thermal load in buildings. Translucent concrete was produced using pre-treated plastic optical fibers (POF, 2 mm dia.) and high-slump white concrete. The concrete mix was proportioned in the ratio of 1:1.9:2.1 with a w/c ratio of 0.40. The POF content was varied from 0.8-9 vol.%. The mechanical properties and light transmission of this concrete were determined. The thermal conductivity of samples was measured by a transient plane source technique. Daylight illumination was measured by a lux grid method as per BIS:SP-41. It was found that the compressive strength of translucent concrete increased with decreasing optical fiber content. An increase of ~28% in the compressive strength of concrete was noticed when the fiber was pre-treated. FE-SEM images showed little debonding between the fibers and the cement paste, which was well supported by pull-out bond strength test results (~187% improvement over untreated fibers). The light transmission of the concrete was in the range of 3-7%, depending on fiber spacing (5-20 mm). The average daylight illuminance (~75 lux) was nearly equivalent to the criterion specified for circulation lighting (80 lux). The thermal conductivity of translucent concrete was reduced by 28-40% with respect to plain concrete. The thermal load calculated by the heat conduction equation was ~16% higher than that of plain concrete. Based on DesignBuilder software, the total annual illumination energy load of a room with one translucent concrete wall was 162.36 kWh, compared with 249.75 kWh for a room without translucent concrete. The calculated energy saving on account of illumination power was ~25%. A marginal improvement in thermal comfort was also noticed. It is concluded that translucent concrete retains the advantages of existing concrete (load bearing) while adding translucency and insulation characteristics. It saves a significant amount of energy by providing natural daylight in place of artificially powered illumination.
Keywords: energy saving, light transmission, microstructure, plastic optical fibers, translucent concrete
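The insulation claim can be illustrated with the steady-state conduction equation Q = kAΔT/d. In the sketch below, only the 28-40% conductivity reduction comes from the reported measurements; the plain-concrete conductivity, wall geometry and temperature difference are assumptions.

```python
# Steady-state conduction comparison for a plain vs. translucent wall.
k_plain = 1.4                    # W/(m*K), typical plain concrete (assumed)
k_trans = k_plain * (1 - 0.34)   # mid-range of the reported 28-40% reduction
area, thickness, delta_t = 10.0, 0.15, 12.0  # m^2, m, K (all assumed)

def conduction_loss(k):
    """Heat flow Q = k * A * dT / d through the wall, in watts."""
    return k * area * delta_t / thickness

q_plain, q_trans = conduction_loss(k_plain), conduction_loss(k_trans)
print(f"plain: {q_plain:.0f} W, translucent: {q_trans:.0f} W "
      f"({(1 - q_trans / q_plain):.0%} lower conduction loss)")
```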
Procedia PDF Downloads 128
591 Bioactive Substances-Loaded Water-in-Oil/Oil-in-Water Emulsions for Dietary Supplementation in the Elderly
Authors: Agnieszka Markowska-Radomska, Ewa Dluska
Abstract:
Maintaining a diet dense in bioactive substances is important for the elderly, especially to prevent disease and support healthy ageing. Adequate intake of bioactive substances can reduce the risk of developing chronic diseases (e.g. cardiovascular disease, osteoporosis, neurodegenerative syndromes, diseases of the oral cavity, gastrointestinal (GI) disorders, diabetes, and cancer). This can be achieved by introducing comprehensive supplementation of the components necessary for the proper functioning of the ageing body. The paper proposes multiple emulsions of the W1/O/W2 (water-in-oil-in-water) type as carriers for effective co-encapsulation and co-delivery of bioactive substances in supplementation of the elderly. Multiple emulsions are complex structured systems ("drops in drops"). The functional structure of the W1/O/W2 emulsion enables (i) incorporation of one or more bioactive components (lipophilic and hydrophilic); (ii) enhancement of the stability and bioavailability of the encapsulated substances; (iii) prevention of interactions between substances, as well as with the external environment; and (iv) delivery to a specific location and release in a controlled manner. The multiple emulsions were prepared continuously by a one-step method in a Couette-Taylor flow (CTF) contactor, whereas in general a two-step emulsification process is used to obtain multiple emulsions. The emulsions were functionalized by introducing a pH-responsive biopolymer, carboxymethylcellulose sodium salt (CMC-Na), into the external phase, which made it possible to achieve component release controlled by the pH of the gastrointestinal environment. The membrane phase of the emulsions was soybean oil. The W1/O/W2 emulsions were evaluated for their characteristics (droplet size/droplet size distribution, volume packing fraction), encapsulation efficiency and stability during storage (up to 30 days) at 4ºC and 25ºC. The in vitro co-release of multiple substances was also investigated in a simulated gastrointestinal environment (varying pH and composition of the release medium). Three groups of stable multiple emulsions were obtained: emulsions I with co-encapsulated vitamins B12, B6 and resveratrol; emulsions II with vitamin A and β-carotene; and emulsions III with vitamins C, E and D3. The substances were encapsulated in the appropriate emulsion phases depending on their solubility. For all emulsions, high encapsulation efficiency (over 95%) and a high volume packing fraction of internal droplets (0.54-0.76) were achieved. In addition, due to the presence of a polymer (CMC-Na) with adhesive properties, high encapsulation stability during storage was achieved. The co-release study of the encapsulated bioactive substances confirmed the possibility of modifying the release profiles. It was found that the release process can be controlled through the composition, structure and physicochemical parameters of the emulsions and the pH of the release medium. The results showed that the obtained multiple emulsions may be used as potential liquid complex carriers for controlled/modified/site-specific co-delivery of bioactive substances in dietary supplementation of the elderly.
Keywords: bioactive substance co-release, co-encapsulation, elderly supplementation, multiple emulsion
Procedia PDF Downloads 198
590 Controlled Synthesis of Pt₃Sn-SnOx/C Electrocatalysts for Polymer Electrolyte Membrane Fuel Cells
Authors: Dorottya Guban, Irina Borbath, Istvan Bakos, Peter Nemeth, Andras Tompos
Abstract:
One of the greatest challenges in the implementation of polymer electrolyte membrane fuel cells (PEMFCs) is finding active and durable electrocatalysts. The cell performance is always limited by the oxygen reduction reaction (ORR) on the cathode, since it is at least 6 orders of magnitude slower than the hydrogen oxidation on the anode. Therefore, a high Pt loading is required. Catalyst corrosion is also more significant on the cathode, especially in mobile applications, where rapid load changes have to be tolerated. Pt-Sn bulk alloys and SnO2-decorated Pt3Sn nanostructures are among the most studied bimetallic systems for fuel cell applications. Exclusive formation of supported Sn-Pt alloy phases with different Pt/Sn ratios can be achieved by using controlled surface reactions (CSRs) between hydrogen adsorbed on Pt sites and tetraethyl tin. In this contribution, our results for commercial and home-made 20 wt.% Pt/C catalysts modified by tin anchoring via CSRs are presented. The parent Pt/C catalysts were synthesized by a modified NaBH4-assisted ethylene glycol reduction method using ethanol as a solvent, which resulted either in dispersed and highly stable Pt nanoparticles or in evenly distributed raspberry-like agglomerates, according to the chosen synthesis parameters. The 20 wt.% Pt/C catalysts prepared in this way showed improved electrocatalytic performance in the ORR and improved stability in comparison with commercial 20 wt.% Pt/C catalysts. Then, in order to obtain Sn-Pt/C catalysts with a Pt/Sn = 3 ratio, the Pt/C catalysts were modified with tetraethyl tin (SnEt4) using three and five consecutive tin anchoring periods. According to in situ XPS studies, in the case of catalysts with highly dispersed Pt nanoparticles, pre-treatment in hydrogen even at 170°C resulted in complete reduction of the ionic tin to Sn0. No evidence of the presence of a SnO2 phase was found by XRD and EDS analysis. These results demonstrate that the method of CSRs is a powerful tool for creating Pt-Sn bimetallic nanoparticles exclusively, without tin deposition onto the carbon support. By contrast, the XPS results revealed that the tin-modified catalysts with raspberry-like Pt agglomerates always contained a fraction of non-reducible tin oxide. At the same time, they showed higher activity and long-term stability in the ORR than Pt/C, which was assigned to the presence of SnO2 in close proximity/contact with the Pt-Sn alloy phase. It was demonstrated that the content and dispersion of the fcc Pt3Sn phase within the electrocatalysts can be controlled by tuning the reaction conditions of the CSRs. The bimetallic catalysts displayed outstanding performance in the ORR. The preparation of a highly dispersed 20 wt.% Pt/C catalyst makes it possible to decrease the Pt content without a relevant decline in the electrocatalytic performance of the catalysts.
Keywords: anode catalyst, cathode catalyst, controlled surface reactions, oxygen reduction reaction, PtSn/C electrocatalyst
Procedia PDF Downloads 235
589 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircrafts
Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira
Abstract:
In order to shorten commute times for short-distance trips and relieve traffic in large cities, a new transport category has become the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute, using vehicles able to take off and land vertically and to provide passenger transport equivalent to a car, with mobility within and between large cities. Today's civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own Vertical Take Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost and flight-time requirements in a sustainable way. Thus, green power supplies, especially batteries, and fully electric power plants are the most common choice for these emerging aircraft. However, finding a feasible way to handle the use of batteries rather than conventional petroleum-based fuels remains a challenge. Batteries are heavy and have an energy density still well below that of gasoline, diesel or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of the genetic algorithm optimization, while the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for the hover and cruise flight phases. For a given trajectory, the best set of control variables is calculated to provide the time history of the aircraft's attitude, rotor RPM and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort and design constraints are imposed to give representativeness to the solution, and the results are highly dependent on these constraints. For the tested cases, the performance improvement ranged from 5 to 10% when varying the initial airspeed, altitude, flight path angle, and attitude.
Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design
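A genetic algorithm of the kind described can be sketched on a toy problem: minimize the energy of a discretized vertical descent subject to touchdown and soft-landing constraints. The cost model, constraint values and GA parameters below are illustrative assumptions, not the paper's dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Decision vector: vertical-speed profile (m/s) over n_seg segments.
n_seg, dt, h0 = 20, 5.0, 300.0  # segments, seconds each, initial altitude (m)

def energy_cost(profile):
    """Toy energy model: hover power plus a descent-rate penalty term,
    with soft penalties enforcing touchdown and a gentle final segment."""
    hover_power = 50.0                         # kW baseline (assumed)
    altitude = h0 - np.cumsum(profile * dt)
    penalty = 0.0
    if altitude[-1] > 1.0:                     # must reach the ground
        penalty += 1e3 * altitude[-1]
    if profile[-1] > 1.5:                      # soft-landing constraint
        penalty += 1e3 * (profile[-1] - 1.5)
    power = hover_power + 2.0 * profile**2     # faster descent costs more
    return np.sum(power * dt) / 3600.0 + penalty  # kWh plus penalties

def genetic_algorithm(pop_size=60, gens=200):
    pop = rng.uniform(0.0, 5.0, (pop_size, n_seg))
    for _ in range(gens):
        cost = np.array([energy_cost(p) for p in pop])
        elite = pop[np.argsort(cost)[: pop_size // 2]]       # selection
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        mask = rng.random((pop_size, n_seg)) < 0.5           # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        pop += rng.normal(0.0, 0.1, pop.shape)               # mutation
        pop = np.clip(pop, 0.0, 5.0)
    best = min(pop, key=energy_cost)
    return best, energy_cost(best)

profile, kwh = genetic_algorithm()
print(f"best landing profile energy: {kwh:.2f} kWh")
```

The real problem replaces this cost function with the fitted tilt-rotor equations of motion and adds attitude, RPM and thrust-direction variables, but the selection/crossover/mutation loop has the same shape.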
Procedia PDF Downloads 115
588 A Research on the Effect of Soil-Structure Interaction on the Dynamic Response of Symmetrical Reinforced Concrete Buildings
Authors: Adinew Gebremeskel Tizazu
Abstract:
The effect of soil-structure interaction on the dynamic response of reinforced concrete buildings of regular and symmetrical geometry is considered in this study. The structures are presumed to be embedded in a homogeneous soil formation underlain by very stiff material or bedrock. The structure-foundation-soil system is excited at the base by an earthquake ground motion. The superstructure is idealized as a system of lumped masses concentrated at the floor levels and coupled with the substructure. The substructure system, which comprises the foundation and the soil, is represented by springs and dashpots. Frequency-dependent impedances of the foundation system are incorporated in the discrete model through the spring and dashpot coefficients. The excitation applied to the model consists of recorded ground motions from actual earthquakes. The modal superposition principle is employed to transform the equations of motion from geometrical to modal coordinates. However, the modal equations remain coupled through the damping terms, due to the difference in the damping mechanisms of the superstructure and the soil; hence, proportional damping may not be assumed for the coupled structural system. An iterative approach is adopted and programmed to solve the system of coupled equations of motion in modal coordinates and obtain the displacement response of the system. Parametric studies of the responses of buildings with regular and symmetric plans, of different structural properties and heights, are made for fixed- and flexible-base conditions, for the different soil conditions encountered in Addis Ababa. The displacements, base shears and base overturning moments are used to compare the different types of structures for various foundation embedment depths, site conditions and structure heights. These values are compared against those of the fixed-base structure. The study shows that flexible-base structures generally exhibit responses different from those of structures with a fixed base. Basically, the natural circular frequencies, base shears and inter-story displacements for the flexible base are less than those of the fixed-base structures. This trend is particularly evident when the flexible soil layer is thick. In contrast, the trend becomes less predictable when the thickness of the flexible soil decreases; moreover, in the latter case, the iteration oscillates significantly, making prediction difficult. This is attributed to the highly jagged nature of the frequency-dependent impedance functions for such formations. In this case, it is difficult to conclude whether the conventional fixed-base approach yields conservative design forces, as is the case for soil formations of large thickness.
Keywords: soil-structure interaction, dynamic response, modal superposition principle, parametric studies
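The fixed-base versus flexible-base contrast can be illustrated with a small generalized eigenvalue problem. The sketch below compares the fundamental frequencies of a 3-storey lumped-mass shear building with and without a base spring; all numerical values are assumptions, and the frequency dependence of the soil impedance (central to the study's iterative scheme) is deliberately neglected here.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative storey mass/stiffness and soil spring; not measured values.
m, k = 2.0e5, 4.0e8          # storey mass (kg), storey stiffness (N/m)
m_f, k_soil = 3.0e5, 6.0e8   # foundation mass (kg), soil spring (N/m)

# Fixed base: 3 DOFs (one per floor) of a shear building.
K_fix = k * np.array([[2, -1, 0], [-1, 2, -1], [0, -1, 1]], float)
M_fix = m * np.eye(3)

# Flexible base: extra DOF for the foundation resting on the soil spring.
K_flex = np.array([[k_soil + k, -k, 0, 0],
                   [-k, 2 * k, -k, 0],
                   [0, -k, 2 * k, -k],
                   [0, 0, -k, k]], float)
M_flex = np.diag([m_f, m, m, m])

for name, K, M in [("fixed", K_fix, M_fix), ("flexible", K_flex, M_flex)]:
    w2 = eigh(K, M, eigvals_only=True)         # generalized eigenproblem
    f1 = np.sqrt(w2[0]) / (2 * np.pi)          # fundamental frequency
    print(f"{name:8s} base: fundamental frequency = {f1:.2f} Hz")
```

Even this crude model reproduces the reported trend: adding base flexibility lowers the fundamental frequency relative to the fixed-base case.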
Procedia PDF Downloads 32