Search results for: running habits
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1110

30 Environmental Impacts Assessment of Power Generation via Biomass Gasification Systems: Life Cycle Analysis (LCA) Approach for Tars Release

Authors: Grâce Chidikofan, François Pinta, A. Benoist, G. Volle, J. Valette

Abstract:

Statement of the Problem: Biomass gasification systems may be relevant for decentralized power generation from recoverable agricultural and wood residues available in rural areas. In recent years, many systems have been implemented all over the world, especially in Cambodia and India. Although they have many positive effects, these systems can also affect the environment and human health. Indeed, during the process of biomass gasification, black wastewater containing tars is produced and generally discharged into the local environment, either into rivers or onto soil. However, in most environmental assessment studies of biomass gasification systems, the impact of these releases is underestimated, owing to the difficulty of identifying their chemical substances. This work deals with the analysis of the environmental impacts of tars from wood gasification in terms of human toxicity cancer effect, human toxicity non-cancer effect, and freshwater ecotoxicity. Methodology: A Life Cycle Assessment (LCA) approach was adopted. The inventory of tar chemical substances was based on experimental data from a downdraft gasification system. Six samples were analyzed from two batches of raw materials: one batch made of three wood species (oak + plane tree + pine) at 25% moisture content and the second batch made of oak at 11% moisture content. The tests were carried out for different gasifier load rates, respectively in the ranges 50-75% and 50-100%. To choose the environmental impact assessment method, we compared the methods available in the SimaPro tool (8.2.0) that take into account most of the chemical substances. The environmental impacts of 1 kg of tars discharged were characterized by the ILCD 2011+ method (V.1.08). Findings: Experimental results revealed 38 important chemical substances in proportions varying from one test to another. Only 30 are characterized by the ILCD 2011+ method, which is one of the best-performing methods. The results show that wood species and moisture content have no significant impact on human toxicity non-cancer effect (HTNCE) and freshwater ecotoxicity (FWE) for release into water. For human toxicity cancer effect (HTCE), a small gap is observed between the impact factors of the two batches: 3.08E-7 CTUh/kg against 6.58E-7 CTUh/kg. On the other hand, it was found that the risk of negative effects is higher when tars are released into water than onto soil, for all impact categories. Indeed, considering the set of samples, the average impact factor obtained for HTNCE varies from 1.64E-7 to 1.60E-8 CTUh/kg. For HTCE, the impact factor varies between 4.83E-07 CTUh/kg and 2.43E-08 CTUh/kg. The variability of these impact factors is relatively low for these two impact categories. Concerning FWE, the variability of the impact factor is very high: 1.3E+03 CTUe/kg for tar release into water against 2.01E+01 CTUe/kg for tar release onto soil. Concluding statement: The results of this study show that the environmental impacts of tar emissions from biomass gasification systems can be substantial, and it is important to investigate ways to reduce them. For environmental research, these results represent an important step toward a global environmental assessment of the studied systems. They could be used to better manage tar-containing wastewater and so reduce, as far as possible, the impacts of the numerous systems still running all over the world.
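
To make the characterization step concrete, the sketch below shows how an impact factor for 1 kg of tars follows from the substance inventory: each substance mass is multiplied by its characterization factor (CF) and the products are summed. The substance list and CF values here are illustrative assumptions, not the study's ILCD 2011+ data.

```python
# Minimal LCA characterization sketch: impact = sum(mass_i * CF_i).
# Substance masses and CFs below are illustrative assumptions only; the study
# uses the full ILCD 2011+ (V.1.08) factor set within the SimaPro 8.2.0 tool.

inventory = {            # kg of substance per kg of tars discharged (assumed)
    "phenol": 0.12,
    "naphthalene": 0.05,
    "benzene": 0.02,
}
cf_freshwater_ecotox = { # CTUe per kg of substance (assumed values)
    "phenol": 160.0,
    "naphthalene": 970.0,
    "benzene": 48.0,
}

def characterize(inventory, cfs):
    """Sum substance masses weighted by characterization factors; substances
    without a CF (8 of the 38 found in the study) drop out of the total."""
    return sum(mass * cfs[s] for s, mass in inventory.items() if s in cfs)

print(f"{characterize(inventory, cf_freshwater_ecotox):.1f} CTUe per kg of tars")
```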

Keywords: biomass gasification, life cycle analysis, LCA, environmental impact, tars

Procedia PDF Downloads 250
29 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling, yet reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks parameters describing the transition between models; our model uses queuing theory parameters to relate these transitions. It associates the MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests an instance can handle per unit of time, the number of incoming requests at a given instant, and a function describing the acceleration in the service's ability to handle more requests. This model is later used as a solution for horizontally auto-scaling time-sensitive services composed of microservices, re-evaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of growth in the number of incoming requests to keep response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request ratio. A typical request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least-loaded instance and preemptively discards requests that cannot finish in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes computing all its requests and is no longer required, it is permanently deallocated. A few load patterns suffice to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption and business costs. The first is a burst-load scenario: all methodologies will discard requests if the burst is rapid enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances handle the load at lower business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
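
As a rough illustration of the reactive loop described above, the sketch below implements one MAPE-K-style scaling step from per-instance capacity, incoming request rate and a cooldown period. The capacity model, headroom value and numbers are assumptions for illustration; the paper's model additionally involves the sampling frequency and the acceleration of the incoming load.

```python
import math

# Illustrative MAPE-K scaling step (assumed capacity model, not the paper's).

def plan_instances(incoming_rps, capacity_rps, headroom=0.7):
    """Instance count that keeps per-instance utilization below `headroom`."""
    return max(1, math.ceil(incoming_rps / (capacity_rps * headroom)))

def mape_k_step(state, incoming_rps, capacity_rps, cooldown_s, now_s):
    # Monitor/Analyze: compare the desired count against the current one.
    desired = plan_instances(incoming_rps, capacity_rps)
    # Plan/Execute: act only once the cooldown period has elapsed.
    if now_s - state["last_action_s"] >= cooldown_s and desired != state["instances"]:
        state["instances"] = desired
        state["last_action_s"] = now_s
    return state

state = {"instances": 2, "last_action_s": 0.0}
state = mape_k_step(state, incoming_rps=120.0, capacity_rps=25.0,
                    cooldown_s=30.0, now_s=60.0)
print(state)  # {'instances': 7, 'last_action_s': 60.0}
```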

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 66
28 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme condition' of cognitive tasks, while consecutive interpreting (CI) does not require interpreters to share processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-built inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech and original speech texts, totalling 321,960 running words. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a non-grammatically-bound sequential unit, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with high cognitive load, our findings show that CI may tax cognitive resources differently, and more heavily, and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize the sequences from the source text into the output in different ways, each minimizing the cognitive load in its own manner. We interpret the results within the framework in which cognitive demand is exerted both on the maintaining and the coordinating components of working memory. On the one hand, the information maintained in CI is inherently larger in volume compared to SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI allows the interpreters to keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures, and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
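
Two of the indices named above are simple enough to show directly. The sketch below computes a type-token ratio and a mean dependency distance (the average distance between each dependent and its syntactic head) on toy inputs; the token list and head assignments are illustrative, not corpus data.

```python
# Sketch of two indices from the study, on toy data rather than the
# 321,960-word corpus: type-token ratio and mean dependency distance.

def type_token_ratio(tokens):
    """Distinct word types divided by total tokens (lexical variety)."""
    return len(set(tokens)) / len(tokens)

def mean_dependency_distance(heads):
    """Average |dependent position - head position| over non-root tokens.
    `heads` maps 1-based token positions to their head's position (0 = root)."""
    pairs = [(dep, head) for dep, head in heads.items() if head != 0]
    return sum(abs(dep - head) for dep, head in pairs) / len(pairs)

tokens = "the interpreter keeps the structure of the source".split()
heads = {1: 2, 2: 3, 3: 0, 4: 5, 5: 3, 6: 5, 7: 8, 8: 6}  # assumed parse
print(type_token_ratio(tokens))         # 0.75 (6 types / 8 tokens)
print(mean_dependency_distance(heads))  # ~1.29
```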

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 141
27 Lessons Learnt from Industry: Achieving Net Gain Outcomes for Biodiversity

Authors: Julia Baker

Abstract:

Development plays a major role in halting biodiversity loss. But the 'silo species' protection of legislation (where certain species are protected while many are not) means that development can be 'legally compliant' and still result in biodiversity loss. 'Net Gain' (NG) policies can help overcome this by making it an absolute requirement that development causes no overall loss of biodiversity and brings a benefit. However, offsetting biodiversity losses in one location with gains elsewhere is controversial, because people suspect 'offsetting' to be an easy way for developers to buy their way out of conservation requirements. Yet the good practice principles (GPP) of offsetting provide several advantages over existing legislation for protecting biodiversity from development. This presentation describes the learning from implementing NG approaches based on GPP. It concerns major upgrades of the UK's transport networks, which involved removing vegetation in order to construct and safely operate new infrastructure. While low-lying habitats were retained, trees and other habitats disrupting the running or safety of transport networks could not be. Consequently, achieving NG within the transport corridor was not possible and offsetting was required. The first lessons learnt were on obtaining a commitment from business leaders to go beyond legislative requirements and deliver NG, and on the institutional change necessary to embed GPP within daily operations. These issues can only be addressed when the challenges that biodiversity poses for business are overcome. These challenges included: biodiversity cannot be measured easily, unlike other sustainability factors such as carbon and water that have metrics for target-setting and measuring progress; and the mindset that biodiversity costs money and does not generate cash in return, the opposite of carbon or waste, for example, where people can see how 'sustainability' actions save money. The challenges were overcome by presenting the GPP of NG as a cost-efficient solution to specific, critical risks facing the business that also boosts industry recognition, and by using government-issued NG metrics to develop business-specific toolkits charting NG progress whilst ensuring that NG decision-making was based on rich ecological data. Institutional change was best achieved by supporting, mentoring and training sustainability/environmental managers, so that these 'frontline' staff could embed GPP within the business. The second learning was from implementing the GPP where business partnered with local governments, wildlife groups and landowners to support their priorities for nature conservation, and where these partners had a say in decisions about where and how best to achieve NG. Through this inclusive approach, offsetting contributed towards conservation priorities when all collaborated to manage trade-offs between: delivering ecologically equivalent offsets or compensating for losses of one type of biodiversity by providing another; achieving NG locally to the development whilst contributing towards national conservation priorities through landscape-level planning; and not just protecting the extent and condition of existing biodiversity but 'doing more'. The multi-sector collaborations identified practical, workable solutions to 'in perpetuity'. But key was strengthening linkages between biodiversity measures implemented for development and conservation work undertaken by local organizations, so that developers support NG initiatives that really count.

Keywords: biodiversity offsetting, development, nature conservation planning, net gain

Procedia PDF Downloads 168
26 The Importance of School Culture in Supporting Student Mental Health Following the COVID-19 Pandemic: Insights from a Qualitative Study

Authors: Rhiannon Barker, Gregory Hartwell, Matt Egan, Karen Lock

Abstract:

Background: Evidence suggests that mental health (MH) issues in children and young people (CYP) in the UK are on the rise. Of particular concern is data indicating that the pandemic, together with the impact of school closures, has accentuated already pronounced inequalities; children from families on low incomes or from black and minority ethnic groups are reportedly more likely to have been adversely impacted. This study aimed to help identify specific support which may facilitate the building of a positive school climate and protect student mental health, particularly in the wake of school closures following the pandemic. It has important implications for integrated working between schools and statutory health services. Methods: The research comprised three parts: scoping, case studies, and a stakeholder workshop to explore and consolidate results. The scoping phase included a literature review alongside interviews with a range of stakeholders from government, academia, and the third sector. Case studies were then conducted in two London state schools. Results: Our research identified how student MH was being impacted by a range of factors located at different system levels, both internal to the school and in the wider community. School climate, relating both to a shared system of beliefs and values and to broader factors including style of leadership, teaching, discipline, safety, and relationships, played a role in the experience of school life and, consequently, in the MH of both students and staff. Participants highlighted the importance of a whole-school approach, of ensuring that support for student MH was not separated from academic achievement, and of identifying and applying universal measurement systems to establish levels of MH need. Our findings suggest that a school's climate is influenced by the style and strength of its leadership, and that this climate, together with the mechanisms put in place to respond to MH needs (both statutory and non-statutory), plays a key role in supporting student MH. Implications: Schools in England have a responsibility to decide on the nature of MH support provided for their students, and there is no requirement for them to report centrally on the form this provision takes. The reality on the ground, as our study suggests, is that MH provision varies significantly between schools, particularly in relation to 'lower' levels of need which are not covered by statutory requirements. A valid concern may be that, within the huge raft of possible options schools have to support CYP wellbeing, too much is left to chance. Work to support schools in rebuilding their cultures post-lockdown must include the means to identify and promote appropriate tools and techniques that facilitate regular measurement of student MH. This will help establish both the scale of the problem and the effectiveness of the response. A strong vision from a school's leadership team that emphasises the importance of student wellbeing, running alongside (but not overshadowed by) academic attainment, should help shape a school climate that promotes beneficial MH outcomes. The sector should also be provided with support to improve the consistency and efficacy of MH provision in schools across the country.

Keywords: mental health, schools, young people, whole-school culture

Procedia PDF Downloads 35
25 Smart and Active Package Integrating Printed Electronics

Authors: Joana Pimenta, Lorena Coelho, José Silva, Vanessa Miranda, Jorge Laranjeira, Rui Soares

Abstract:

In this paper, the results of R&D on an innovative food package for increased shelf-life are presented. SAP4MA aims at the development of a printed active device enabling smart packaging solutions for food preservation, targeting the extension of the shelf-life of packed food through the controlled release of natural antioxidant agents at the onset of the food degradation process. To do so, SAP4MA focuses on the development of active devices such as printed heaters and batteries/supercapacitors in a label format, to be integrated into packaging lids during the injection molding process, promoting the passive release of natural antioxidants after the product is packed, during transportation and on the shelves, and the active release when the end user activates the package just prior to consuming the product at home. When the active device on the lid is activated, the natural antioxidants embedded in the inner layer of the packaging lid, in direct contact with the headspace atmosphere of the food package, begin to be released. This approach is based on active functional coatings composed of nano-encapsulated active agents (natural antioxidant species) that prevent the oxidation of lipid compounds in food by agents such as oxygen, thus maintaining product quality during the shelf-life, not only when the user opens the packaging but also during the period from packaging up until purchase by the consumer. The active systems that make up the printed smart label, the heating circuit and the battery, were developed using screen-printing technology and must operate under the working conditions associated with this application. The printed heating circuit was studied using three different substrates and two different conductive inks. Inks were selected taking into consideration that the printed circuits will be subjected to high pressures and temperatures during the injection molding process. The circuit must reach a homogeneous temperature of 40ºC across the entire area of the lid of the food tub, promoting a gradual and controlled release of the antioxidant agents. In addition, the circuit design required extensive study to guarantee maximum performance after the injection process and to meet the specifications of the control electronics. Furthermore, to characterize the different heating circuits, the electrical resistance set by the conductive ink and the circuit design, as well as the thermal behavior of the circuits printed on different substrates, were evaluated. In the injection molding process, the serpentine-shaped design developed for the heating circuit resolved the issues connected to the injection point; in addition, the materials used for the substrate and printing had high mechanical resistance to the pressure and temperature inherent in the injection process. Acknowledgment: This research has been carried out within the project 'Smart and Active Packaging for Margarine Product' (SAP4MA), running under the EURIPIDES Program and co-financed by COMPETE 2020, the Operational Programme for Competitiveness and Internationalization, and under Portugal 2020 through the European Regional Development Fund (ERDF).
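
For readers unfamiliar with printed heaters, the relationship the abstract alludes to between ink, circuit design and heating can be sketched as follows. The sheet resistance, trace geometry and supply voltage used here are assumed values for illustration, not SAP4MA measurements.

```python
# Illustrative sketch of printed-heater sizing (all values assumed):
# trace resistance follows from the ink's sheet resistance and the
# serpentine geometry; Joule heating follows from the supply voltage.

def trace_resistance_ohm(sheet_res_ohm_per_sq, length_mm, width_mm):
    """Resistance = sheet resistance x number of squares (length/width)."""
    return sheet_res_ohm_per_sq * (length_mm / width_mm)

def heater_power_w(supply_v, resistance_ohm):
    """Joule heating P = V^2 / R dissipated across the printed trace."""
    return supply_v ** 2 / resistance_ohm

r = trace_resistance_ohm(0.03, length_mm=600, width_mm=2.0)  # assumed serpentine
print(f"{r:.1f} ohm -> {heater_power_w(4.5, r):.2f} W at 4.5 V")  # 9.0 ohm -> 2.25 W
```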

Keywords: smart package, printed heating circuits, printed batteries, flexible and printed electronics

Procedia PDF Downloads 79
24 A Randomised Simulation Study to Assess the Impact of a Focussed Crew Resource Management Course on UK Medical Students

Authors: S. MacDougall-Davis, S. Wysling, R. Willmore

Abstract:

Background: The application of good non-technical skills (NTS), also known as crew resource management (CRM), is central to the delivery of safe, effective healthcare. The authors have been running remote trauma courses for over 10 years, primarily focussing on developing participants' CRM in time-critical, high-stress clinical situations. The course has undergone an iterative development process over that period. We employ a number of experiential learning techniques for improving CRM, including small-group workshops, military command tasks, high-fidelity simulations with reflective debriefs, and a 'flipped classroom' in which participants create their own simulations and assess and debrief their colleagues' CRM. We designed a randomised simulation study to assess the impact of our course on UK medical students' CRM, at both the individual and the team level. Methods: Sixteen students took part. Four clinical scenarios were devised, designed to be of similar urgency and complexity. Professional moulage effects and experienced clinical actors were used to increase fidelity and to further simulate high-stress environments. Participants were block randomised into teams of four; each team was randomly assigned to one pre-course simulation. They then undertook our 5-day remote trauma CRM course. Post-course, students were re-randomised into four new teams; each was randomly assigned to a post-course simulation. All simulations were videoed. The footage was reviewed by two independent CRM-trained assessors, who were blinded to the before/after status of the simulations. Assessors used the internationally validated Team Emergency Assessment Measure (TEAM) to evaluate key areas of team performance, as well as a global outcome rating. Prior to the study, the assessors had scored two unrelated scenarios using the same assessment tool, demonstrating 89% concordance. Participants also completed pre- and post-course questionnaires, using Likert scales to rate their perceived NTS ability and their confidence to work in a team in time-critical, high-stress situations. Results: Following participation in the course, a significant improvement in CRM was observed in all areas of team performance. Furthermore, the global outcome rating for team performance was markedly improved (40-70%; mean 55%), demonstrating an impact at Level 4 of Kirkpatrick's hierarchy. At an individual level, participants' self-perceived CRM improved markedly after the course (35-70% absolute improvement; mean 55%), as did their confidence to work in a team in high-stress situations. Conclusion: Our study demonstrates that with a short, cost-effective course using easily reproducible teaching sessions, it is possible to significantly improve participants' CRM skills, both at an individual and, perhaps more importantly, at a team level. The successful functioning of multidisciplinary teams is vital in a healthcare setting, particularly in high-stress, time-critical situations, where good CRM is of paramount importance. The authors believe that these concepts should be introduced from the earliest stages of medical education, promoting a culture of effective CRM and embedding an early appreciation of the importance of these skills in enabling safe and effective healthcare.

Keywords: crew resource management, non-technical skills, training, simulation

Procedia PDF Downloads 110
23 Comparisons of Drop Jump and Countermovement Jump Performance for Male Basketball Players with and without Low-Dye Taping Application

Authors: Chung Yan Natalia Yeung, Man Kit Indy Ho, Kin Yu Stan Chan, Ho Pui Kipper Lam, Man Wah Genie Tong, Tze Chung Jim Luk

Abstract:

Excessive foot pronation is a well-known risk factor for knee and foot injuries such as patellofemoral pain, patellar and Achilles tendinopathy, and plantar fasciitis. Low-Dye taping (LDT) is commonly applied by basketball players to control excessive foot pronation for pain control and injury prevention. Its primary potential benefits include providing additional support to the medial longitudinal arch and restricting excessive midfoot and subtalar motion in weight-bearing activities such as running and landing. Meanwhile, the restriction provided by the rigid tape may also limit functional joint movements and sports performance. Coaches and athletes need to weigh the potential benefits against the potentially harmful effects before deciding whether applying the LDT technique is worthwhile. However, the influence of LDT on basketball-related performance qualities such as explosive and reactive strength is not well understood. Therefore, the purpose of this study was to investigate the change in drop jump (DJ) and countermovement jump (CMJ) performance before and after LDT application in collegiate male basketball players. In this within-subject crossover study, 12 healthy male basketball players (age: 21.7 ± 2.5 years) with at least 3 years of regular basketball training experience were recruited. The navicular drop (ND) test was adopted for screening, and only those with excessive pronation (ND ≥ 10 mm) were included; participants with recent lower-limb injury history were excluded. Recruited subjects performed the ND, DJ (from a 40 cm platform) and CMJ (without arm swing) tests in series under taped and non-taped conditions in counterbalanced order. The reactive strength index (RSI) was calculated as flight time divided by ground contact time. For the DJ and CMJ tests, the best of three trials was used for analysis. The difference between taped and non-taped conditions for each test was further calculated as a standardized effect ± 90% confidence interval (CI) with clinical magnitude-based inference (MBI). A paired-samples t-test showed a significant decrease in ND (-4.68 ± 1.44 mm; 95% CI: -3.77, -5.60; p < 0.05), while MBI demonstrated a most likely beneficial, large effect (standardized effect: -1.59 ± 0.27) in the LDT condition. For the DJ test, significant increases in both flight time (25.25 ± 29.96 ms; 95% CI: 6.22, 44.28; p < 0.05) and RSI (0.22 ± 0.22; 95% CI: 0.08, 0.36; p < 0.05) were observed. In the taped condition, MBI showed a very likely beneficial, moderate effect (standardized effect: 0.77 ± 0.49) on flight time, a possibly beneficial, small effect (standardized effect: -0.26 ± 0.29) on ground contact time, and a very likely beneficial, moderate effect (standardized effect: 0.77 ± 0.42) on RSI. No significant difference in CMJ was observed (95% CI: -2.73, 2.08; p > 0.05). For basketball players with pes planus, applying LDT could substantially support the foot by elevating the navicular height and potentially provide acute beneficial effects on reactive strength performance, while no significant harmful effect on CMJ was observed. Basketball players may consider applying LDT before games or training to enhance reactive strength performance. However, since the observed effects may not generalize to players without excessive foot pronation, further studies on players with a normal foot arch or navicular height are recommended.
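
The RSI definition used in the study is simple enough to state in code. The sketch below reproduces the calculation exactly as described (flight time divided by ground contact time); the jump times in the example are illustrative, not participant data.

```python
# Reactive strength index as defined in the abstract:
# RSI = flight time / ground contact time (identical units cancel out).

def reactive_strength_index(flight_time_ms: float, contact_time_ms: float) -> float:
    return flight_time_ms / contact_time_ms

# Illustrative drop-jump timings (not study data): a longer flight after a
# shorter contact raises RSI, which is why both components were reported.
print(round(reactive_strength_index(520.0, 240.0), 2))  # 2.17
```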

Keywords: flight time, pes planus, pronated foot, reactive strength index

Procedia PDF Downloads 129
22 Accurate Energy Assessment Technique for Mine-Water District Heat Network

Authors: B. Philip, J. Littlewood, R. Radford, N. Evans, T. Whyman, D. P. Jones

Abstract:

UK buildings and energy infrastructures are heavily dependent on natural gas, a large proportion of which is used for domestic space heating, and approximately half of the gas consumed in the UK is imported. Improving energy security and reducing carbon emissions are major government drivers for reducing gas dependency. To do so, there needs to be a wholesale shift in energy provision to householders without impacting thermal comfort levels, convenience or cost of supply to the end user. Heat pumps are seen as a potential alternative in modern, well-insulated homes; however, can the same be said of older homes? A large proportion of the housing stock in Britain was built prior to 1919. The age of the buildings bears testimony to the quality of their construction; however, their thermal performance falls far below the minimum currently set by UK building standards. In recent years, significant sums of money have been invested to improve energy efficiency and combat fuel poverty in some of the most deprived areas of Wales. Increasing the energy efficiency of older properties remains a significant challenge, which cannot be achieved through insulation and air-tightness interventions alone, particularly when alterations to historically important architectural features of the building are not permitted. This paper investigates the energy demand of pre-1919 dwellings in a former Welsh mining village, the feasibility of meeting that demand using water from the disused mine workings to supply a district heat network, and potential barriers to the success of the scheme. The use of renewable solar energy generation and storage technologies, both thermal and electrical, to reduce the load and offset increased electricity demand is considered. A holistic surveying approach is proposed to provide a more accurate assessment of total household heat demand. Several surveying techniques, including condition surveys, air permeability testing, heat loss calculations, and thermography, were employed to provide a clear picture of energy demand. Additional insulation can bring unforeseen consequences detrimental to the fabric of the building, potentially leading to accelerated dilapidation of the asset being 'protected'; increasing ventilation should be considered in parallel, to compensate for the associated reduction in uncontrolled infiltration. The effectiveness of thermal performance improvements is demonstrated, and the detrimental effects of incorrect material choice and poor installation are highlighted. The findings show estimated heat demand to be in close correlation with household energy bills. Major areas of heat loss were identified so that improvements to building thermal performance could be targeted. The findings demonstrate that the use of heat pumps in older buildings is viable, provided sufficient improvement to thermal performance is possible. The addition of passive solar thermal and photovoltaic generation can help reduce the load and running cost for the householder. The results were used to predict future heat demand following energy efficiency improvements, thereby informing the size of heat pumps required.
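
As a sketch of the kind of heat loss calculation the surveying approach feeds into, the snippet below combines fabric losses (U-value times area for each element) with a simple ventilation term. All U-values, areas and the air-change rate are assumptions for illustration, not measurements from the study dwellings.

```python
# Illustrative whole-dwelling heat loss estimate (all values assumed).

def fabric_loss_w_per_k(elements):
    """Sum of U * A over building elements, in W/K."""
    return sum(u * a for u, a in elements)

def ventilation_loss_w_per_k(ach, volume_m3):
    """Approximate ventilation loss: 0.33 * air changes per hour * volume."""
    return 0.33 * ach * volume_m3

elements = [
    (2.1, 85.0),  # solid brick walls: U (W/m2K), area (m2) -- assumed
    (2.8, 12.0),  # single-glazed windows -- assumed
    (1.5, 48.0),  # uninsulated roof -- assumed
]
total = fabric_loss_w_per_k(elements) + ventilation_loss_w_per_k(1.2, 220.0)
print(f"Peak demand at a 20 K inside-outside difference: {total * 20 / 1000:.1f} kW")
```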

Keywords: heat demand, heat pump, renewable energy, retrofit

Procedia PDF Downloads 72
21 Flood Early Warning and Management System

Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare

Abstract:

The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an early warning system for flood prediction and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce the economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an early warning system for flood prediction (EWS-FP) using advanced computational tools and methods, viz. high-performance computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, there is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are run. High-performance computing provides a good computational means to overcome this issue in the construction of national-level or basin-level flash flood warning systems that offer high-resolution, local-level warning analysis with better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimal resolutions. In this study, a free and open-source, HPC-based 2D hydrodynamic model, with the capability to simulate rainfall runoff, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing the CPU nodes from 45 to 135, which shows good scalability and performance enhancement. The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and better lead time, suitable for flood forecasting in near real time. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending upon the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.
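
The scalability figures quoted above can be checked directly: tripling the nodes from 45 to 135 cut the runtime from 8 hours to 3, which corresponds to roughly 89% parallel efficiency.

```python
# Scaling check using the figures reported in the abstract:
# 45 -> 135 CPU nodes (3x resources) reduced runtime from 8 h to 3 h.

def speedup(t_before_h: float, t_after_h: float) -> float:
    return t_before_h / t_after_h

def parallel_efficiency(s: float, resource_ratio: float) -> float:
    """Fraction of the ideal linear speedup actually achieved."""
    return s / resource_ratio

s = speedup(8.0, 3.0)
print(f"speedup {s:.2f}x, efficiency {parallel_efficiency(s, 135 / 45):.0%}")
# -> speedup 2.67x, efficiency 89%
```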

Keywords: flood, modeling, HPC, FOSS

Procedia PDF Downloads 62
20 A Next-Generation Pin-On-Plate Tribometer for Use in Arthroplasty Material Performance Research

Authors: Lewis J. Woollin, Robert I. Davidson, Paul Watson, Philip J. Hyde

Abstract:

Introduction: In-vitro testing of arthroplasty materials is of paramount importance in ensuring that they can withstand the performance requirements encountered in-vivo. One common machine used for in-vitro testing is the pin-on-plate tribometer, an early-stage screening device that generates data on the wear characteristics of arthroplasty bearing materials. These devices test vertically loaded rotating cylindrical pins acting against reciprocating plates, the pins and plates representing the bearing surfaces. In this study, a pin-on-plate machine has been developed that provides several improvements over current technology, thereby advancing arthroplasty bearing research. Historically, pin-on-plate tribometers have been used to investigate the performance of arthroplasty bearing materials under conditions commonly encountered during a standard gait cycle; nominal operating pressures of 2-6 MPa and an operating frequency of 1 Hz are typical. There has been increased interest in using pin-on-plate machines to test more representative in-vivo conditions, due to the drive to test 'beyond compliance', as well as their testing speed and economic advantages over hip simulators. Current pin-on-plate machines do not accommodate the increased performance requirements associated with these more extreme kinematic conditions; therefore, a next-generation pin-on-plate tribometer has been developed to bridge the gap between current technology and future research requirements. Methodology: The design was driven by several physiologically relevant requirements. Firstly, an increased loading capacity was essential to replicate the peak pressures that occur in the natural hip joint during running and chair-rising, as well as to improve understanding of wear rates in obese patients. Secondly, the introduction of mid-cycle load variation was of paramount importance, as it allows an approximation of the loads present in a gait cycle to be applied and the fatigue properties of materials to be tested. Finally, the rig must be validated against previous-generation pin-on-plate and arthroplasty wear data. Results: The resulting machine is a twelve-station device, split into three sets of four stations, providing increased testing capacity compared to most current pin-on-plate tribometers. Pin loading is generated by a pneumatic system, which can produce contact pressures of up to 201 MPa on a 3.2 mm² round pin face. This greatly exceeds the contact pressures currently reported in the literature and opens new research avenues, such as testing rim wear of mal-positioned hip implants. Additionally, the contact pressure of each set can be changed independently of the others, allowing multiple loading conditions to be tested simultaneously. Using pneumatics also allows the applied pressure to be switched on and off mid-cycle, another feature not currently reported elsewhere, which allows investigation into intermittent loading and material fatigue. The device is currently undergoing a series of validation tests using ultra-high-molecular-weight polyethylene pins and 316L stainless steel plates (polished to Ra < 0.05 µm). Operating pressures will be between 2 and 6 MPa at 1 Hz, allowing validation of the machine against results previously reported in the literature. The successful production of this next-generation pin-on-plate tribometer will, following its validation, unlock multiple previously unavailable research avenues.
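
The headline loading figure can be sanity-checked from the numbers in the abstract: reaching 201 MPa on a 3.2 mm² pin face requires roughly 643 N of axial force per station.

```python
# Axial force needed for the quoted contact pressure, using the abstract's
# figures: 201 MPa over a 3.2 mm^2 round pin face.
# Unit shortcut: 1 MPa x 1 mm^2 = 1 N, so the product is already in newtons.

def axial_force_n(pressure_mpa: float, pin_area_mm2: float) -> float:
    return pressure_mpa * pin_area_mm2

print(f"{axial_force_n(201.0, 3.2):.1f} N")  # 643.2 N per pin
```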

Keywords: arthroplasty, mechanical design, pin-on-plate, total joint replacement, wear testing

Procedia PDF Downloads 71
19 Microplastics in Fish from Grenada, West Indies: Problems and Opportunities

Authors: Michelle E. Taylor, Clare E. Morrall

Abstract:

Microplastics are small particles produced for industrial purposes or formed by the breakdown of anthropogenic debris. Caribbean nations import large quantities of plastic products. The Caribbean region is vulnerable to natural disasters, and climate change is predicted to bring multiple additional challenges to island nations. Microplastics have been found in an array of marine environments and in a diversity of marine species. The occurrence of microplastics in the intestinal tracts of marine fish is a concern for human and ecosystem health, as pollutants and pathogens can associate with plastics. Studies have shown that the incidence of microplastics in marine fish varies with species and location. The prevalence of microplastics (≤ 5 mm) in fish species from Grenadian waters (representing pelagic, semi-pelagic and demersal lifestyles) harvested for human consumption has been investigated via gut analysis. Harvested tissue was digested in 10% KOH, and particles retained on a 0.177 mm sieve were examined. The microplastics identified have been classified according to type, colour and size. Over 97% of the fish examined thus far (n=34) contained microplastics. Current and future work includes examining the invasive lionfish (Pterois spp.) for microplastics, investigating marine invertebrate species, and examining environmental sources of microplastics (i.e. rivers, coastal waters and sand). Owing to concerns about pollutant accumulation on microplastics and potential migration into organismal tissues, we plan to analyse fish tissue for mercury and other persistent pollutants. Despite having ~110,000 inhabitants, the island nation of Grenada imported approximately 33 million plastic bottles in 2013, of which an estimated less than 5% were recycled. Over 30% of the imported bottles were 'unmanaged' and as such are potential litter/marine debris. A revised Litter Abatement Act passed into law in Grenada in 2015, but little enforcement of the law is evident to date. A local non-governmental organization (NGO), the Grenada Green Group (G3), is focused on reducing litter in Grenada by lobbying government to implement the revised act and by running sessions in schools, community groups, local media and social media to raise awareness of the problems associated with plastics. A local private company has indicated willingness to support an anti-litter campaign in 2018, and local awareness of the need to reduce single-use plastics and litter seems to be high. The Government of Grenada has called for a sustainable waste management strategy, and bans on both Styrofoam and plastic grocery bags are among the recommendations recently submitted. A Styrofoam ban will be in place at the St. George's University campus from January 1st, 2018, and many local businesses have already voluntarily moved away from Styrofoam. Our findings underscore the importance of continuing investigations into microplastics in marine life, which will contribute to understanding the associated health risks, and they support action to mitigate the volume of plastics entering the world's oceans. We hope that Grenada's future will involve a lot less plastic. This research was supported by the Caribbean Node of the Global Partnership on Marine Litter.

Keywords: Caribbean, microplastics, pollution, small island developing nation

Procedia PDF Downloads 182
18 Evidence Based Dietary Pattern in South Asian Patients: Setting Goals

Authors: Ananya Pappu, Sneha Mishra

Abstract:

Introduction: The South Asian population experiences unique health challenges that predispose this demographic to cardiometabolic diseases at lower BMIs. South Asians may therefore benefit from recommendations specific to their cultural needs. Here, we focus on current BMI guidelines for Asians, with a discussion of South Asian dietary practices and culturally tailored interventions. By integrating traditional dietary practices with modern nutritional recommendations, this manuscript aims to highlight effective strategies for improving health outcomes among South Asians. Background: The South Asian community, including individuals from India, Pakistan, Bangladesh, and Sri Lanka, experiences high rates of cardiovascular disease, cancers, diabetes, and strokes. Notably, the prevalence of diabetes and cardiovascular disease among Asians is elevated at BMIs below the WHO's standard overweight threshold. As it stands, a BMI of 25-30 kg/m² is considered overweight in non-Asians, while this cutoff is reduced to 23-27.4 kg/m² in Asians. This discrepancy is attributable to studies which have shown different associations between BMI and health risks in Asians compared with other populations. Given these significant challenges, optimizing lifestyle management of cardiometabolic risk factors is crucial. Tailored interventions that consider cultural context seem to be the best approach for ensuring the success of both dietary and physical activity interventions in South Asian patients. Adopting a whole-food, plant-based diet (WFPD) is one such strategy. The WFPD suggests that half of each meal should consist of non-starchy vegetables; in the South Asian diet, this includes traditional vegetables such as okra, tindora, and eggplant, and leafy greens including amaranth, collards, chard, and mustards. A quarter of the meal should include plant-based protein sources like cooked beans, lentils, and paneer, with the remaining quarter comprising healthy grains or starches such as whole wheat breads, millets, tapioca, and barley. Adherence to the WFPD has been shown to improve cardiometabolic risk factors, including weight, BMI, total cholesterol, and HbA1c, and to reduce the risk of developing non-alcoholic fatty liver disease (NAFLD). Another approach to improving dietary habits is meal timing. Many of the major cultures and religions of the Indian subcontinent incorporate religious fasting. Time-restricted eating (TRE), also known as intermittent fasting, is a practice akin to traditional fasting which involves consuming all daily calories within a specific window. TRE has been shown to improve insulin resistance in prediabetic and diabetic patients. Common regimens include completing all meals within an 8-hour window, consuming a low-calorie diet every other day, and the 5:2 diet, which involves fasting twice weekly. These fasting practices align with the natural circadian rhythm, potentially enhancing metabolic health and reducing obesity and diabetes risks. Conclusion: South Asians develop cardiometabolic disease at lower BMIs; hence, it is important to counsel patients about lifestyle interventions that decrease their risk. Traditional South Asian diets can be made more nutrient-rich by incorporating vegetables and plant proteins like lentils and beans, and by substituting whole grains for refined grains. Ultimately, the best diet is one to which a patient can adhere, so it is important to find a regimen that aligns with a patient's cultural and traditional food practices.
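
The cutoff discrepancy described above is easy to make concrete. The sketch below applies the two overweight ranges quoted in the abstract (25-30 kg/m² for non-Asians, 23-27.4 kg/m² for Asians); the example weight and height are illustrative.

```python
# BMI classification using the thresholds quoted in the abstract:
# overweight is 25-30 kg/m^2 for non-Asians but 23-27.4 kg/m^2 for Asians,
# with obesity beginning above each upper bound.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def classify(bmi_value: float, asian: bool = False) -> str:
    lower, upper = (23.0, 27.4) if asian else (25.0, 30.0)
    if bmi_value < lower:
        return "not overweight"
    if bmi_value <= upper:
        return "overweight"
    return "obese"

b = bmi(72, 1.70)  # ~24.9 kg/m^2 (illustrative patient)
print(classify(b, asian=False), "/", classify(b, asian=True))
# -> "not overweight" under the non-Asian cutoff, "overweight" under the Asian cutoff
```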

Keywords: BMI, diet, obesity, South Asian, time-restricted eating

Procedia PDF Downloads 7
17 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing

Authors: Arjun Kumar Rath, Titus Dhanasingh

Abstract:

Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing at larger quantities. As components become more integrated, devices are tested for their full functionality using advanced software tools, and benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, they are custom-built for every product and remain unusable for other variants; the majority of tests go undocumented, are not updated, and become unusable once the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, Autotest, KernelCI, etc.) designed for testing embedded system devices, each with several unique and useful features, but no single tool or framework satisfies all the testing needs of embedded systems; hence the case for an extensible framework orchestrating a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods involve developing test libraries and support components for every new hardware platform, even those that belong to the same domain with identical hardware architecture. This approach has drawbacks such as non-reusability, where platform-specific libraries cannot be reused; the need to maintain source infrastructure for individual hardware platforms; and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in test environment setup, scalability, and maintenance. A desirable strategy is certainly one focused on maximizing reusability, continuous integration, and leveraging artifacts across the complete development cycle, across phases of testing and across a family of products. To overcome the stated challenges of the conventional method and deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, was designed that can be deployed in embedded-system products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing different hardware, including microprocessors and microcontrollers. It offers benefits such as (1) time-to-market: it accelerates board bring-up time with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing and device drivers); (2) reusability: framework components isolated from platform-specific hardware initialization and configuration make adapting test cases across various platforms quick and simple; (3) an effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; (4) continuous integration: pre-integration with Jenkins enables continuous testing and an automated software update feature. Applying the embedded test framework accelerator throughout the design and development phases enables the development of well-tested systems before functional verification and improves time to market to a large extent.
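
The reusability idea, test logic written once against an abstract board interface while platform-specific bring-up stays behind it, can be sketched as below. This is an illustrative pattern, not the ETF's actual API; the class names and register values are hypothetical.

```python
# Illustrative sketch (not the ETF's real API) of platform isolation:
# generic test logic depends only on an abstract Board, while each
# hardware platform supplies its own adapter.

from abc import ABC, abstractmethod

class Board(ABC):
    """Platform-specific initialization lives in subclasses, not in tests."""
    @abstractmethod
    def read_register(self, addr: int) -> int: ...

class MyBoardRevA(Board):  # hypothetical platform adapter
    def read_register(self, addr: int) -> int:
        return 0xDEADBEEF  # stand-in for a real bus read on this board

def test_device_id(board: Board, id_addr: int, expected: int) -> bool:
    """Generic peripheral test, reusable across any Board implementation."""
    return board.read_register(id_addr) == expected

print(test_device_id(MyBoardRevA(), id_addr=0x0, expected=0xDEADBEEF))  # True
```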

Keywords: board diagnostics software, embedded system, hardware testing, test frameworks

Procedia PDF Downloads 116
16 Economic Impacts of Sanctuary and Immigration and Customs Enforcement Policies: Inclusive and Exclusive Institutions

Authors: Alexander David Natanson

Abstract:

This paper focuses on the effect of Sanctuary and Immigration and Customs Enforcement (ICE) policies on local economies. "Sanctuary cities" refers to municipal jurisdictions that limit their cooperation with the federal government's immigration enforcement efforts. Using county-level data on economic indicators from the American Community Survey, together with ICE data, from 2006 to 2018, this study isolates the effects of local immigration policies on U.S. counties. The investigation is accomplished by simultaneously studying the policies' effects in counties where immigrant families are persecuted via collaboration with ICE, in contrast to counties that provide protections. The analysis includes difference-in-difference and two-way fixed-effects models. Results are robust to nearest-neighbor matching, to random assignment of treatment, to estimations using different cutoffs for immigration policies, and to a regression discontinuity model comparing bordering counties with opposite policies. Results are also robust after restricting the data to single-year policy adoption, when using the Sun and Abraham estimator, and under event-study estimation to address the staggered-treatment issue. In addition, the study reverses the estimation, modeling what drives the decision to adopt these policies, to detect reverse-causality biases in the estimated policy impact on economic factors. The evidence demonstrates that providing protections to undocumented immigrants increases economic activity. The estimates show gains in per capita income ranging from 3.1 to 7.2 percent, in median wages from 1.7 to 2.6 percent, and in GDP from 2.4 to 4.1 percent. Regarding labor, sanctuary counties saw increases in total employment of 2.3 to 4 percent, and the unemployment rate declined by 12 to 17 percent. The data further show that ICE policies have no statistically significant effects on income, median wages, or GDP, but adverse effects on total employment, with declines of 1 to 2 percent, mostly in rural counties, and an increase in unemployment of around 7 percent in urban counties. In addition, results show a decline in the foreign-born population in ICE counties but no change in sanctuary counties. The study also finds similar results for sanctuary counties when separating the data by urban and rural counties, educational attainment, gender, ethnic group, economic quintile, and number of business establishments. The takeaway from this study is that institutional inclusion underpins the dynamic nature of an economy, as inclusion allows for economic expansion through the extension of fundamental freedoms to newcomers. Inclusive policies show positive effects on economic outcomes with no evident increase in population. To make sense of these results, the hypothesis and theoretical model propose that inclusive immigration policies play an essential role in conditioning the effect of immigration by decreasing uncertainties and constraints on immigrants' interaction in their communities, reducing the costs arising from fear of deportation or constant fear of criminalization, and optimizing their human capital.
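
For readers unfamiliar with the core identification strategy, the sketch below runs a minimal difference-in-difference regression on synthetic data: the coefficient on the treatment-by-post interaction is the policy effect. The data frame is fabricated for illustration and bears no relation to the study's ACS/ICE panel; the study's actual specifications add two-way fixed effects, matching, and event-study estimators.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic 2x2 difference-in-difference example (illustrative data only):
# `sanctuary` flags treated counties, `post` flags years after adoption.
df = pd.DataFrame({
    "income":    [50, 51, 55, 58, 48, 49, 50, 51],
    "sanctuary": [1, 1, 1, 1, 0, 0, 0, 0],
    "post":      [0, 0, 1, 1, 0, 0, 1, 1],
})

# income ~ sanctuary + post + sanctuary:post; the interaction term is the
# DiD estimate of the policy effect.
model = smf.ols("income ~ sanctuary * post", data=df).fit()
print(model.params["sanctuary:post"])  # 4.0: treated counties gained 4 more
```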

Keywords: inclusive and exclusive institutions, post matching, fixed effect, time trend, regression discontinuity, difference-in-difference, randomization inference, Sun and Abraham estimator

Procedia PDF Downloads 57
15 An Exploration of Health Promotion Approach to Increase Optimal Complementary Feeding among Pastoral Mothers Having Children between 6 and 23 Months in Dikhil, Djibouti

Authors: Haruka Ando

Abstract:

Undernutrition of children is a critical issue, especially for people in the remote areas of the Republic of Djibouti, since household food insecurity, inadequate child caring and feeding practices, an unhealthy environment and lack of clean water, as well as insufficient maternal and child healthcare, are among its underlying causes. Nomadic pastoralists living in the Dikhil region (Dikhil) are socio-economically and geographically more vulnerable due to displacement, which in turn worsens child stunting. A high prevalence of inappropriate complementary feeding among pastoral mothers may be a significant barrier to child growth. This study aims to identify health promotion intervention strategies that would support an increase in optimal complementary feeding among pastoral mothers of children aged 6-23 months in Dikhil. There are four objectives: to explore and understand the existing practice of complementary feeding among pastoral mothers in Dikhil; to identify the barriers to appropriate complementary feeding among these mothers; to critically explore and analyse strategies for increasing complementary feeding among them; and to make pragmatic recommendations to address the barriers in Djibouti. This is an in-depth study utilizing a conceptual framework, the behaviour change wheel, to analyse the determinants of complementary feeding and to categorize health promotion interventions for increasing optimal complementary feeding among pastoral mothers living in Dikhil. The analytical tool was used to appraise strategies for mitigating selected barriers to optimal complementary feeding. The data sources were secondary literature from both published and unpublished sources, collected systematically. The determinants, including barriers to optimal complementary feeding, were identified: heavy household workload, caring for multiple children under five, lack of education, cultural norms and traditional eating habits, lack of husbands' support, poverty and food insecurity, lack of clean water, low media coverage, insufficient health services on complementary feeding, fear, poor personal hygiene, and mothers' low decision-making ability and lack of motivation for food choice. To mitigate selected barriers to optimal complementary feeding, four intervention strategies based on interpersonal communication at the community level were chosen: scaling up mothers' support groups, nutrition education, a grandmother-inclusive approach, and training for complementary feeding counselling. The strategies were appraised against the criteria of effectiveness and feasibility; scaling up mothers' support groups could be the best approach. Mid-term and long-term recommendations are suggested based on the situation analysis and the appraisal of intervention strategies. Mid-term recommendations include integrating complementary feeding promotion interventions into the healthcare service delivery system in Dikhil, and donor agencies advocating and lobbying the Ministry of Health Djibouti (MoHD) to increase budgetary allocations for complementary feeding promotion so that interventions can be implemented at the community level. Moreover, the recommendations include a community health management team in Dikhil training healthcare workers and mothers' support groups using complementary feeding communication guidelines, and monitoring the behaviour change of pastoral mothers and the health outcomes of their children. The long-term recommendation is for the MoHD to develop complementary feeding guidelines that cover sector-wide collaboration for multi-sector-related barriers.

Keywords: Afar, child food, child nutrition, complementary feeding, complementary food, developing countries, Djibouti, East Africa, hard-to-reach areas, Horn of Africa, nomad, pastoral, rural area, Somali, Sub-Saharan Africa

Procedia PDF Downloads 98
14 A Study on Economic Impacts of Entrepreneurial Firms and Self-Employment: Minority Ethnics in Putatan, Penampang, Inanam, Menggatal, Uitm, Tongod, Sabah, Malaysia

Authors: Lizinis Cassendra Frederick Dony, Jirom Jeremy Frederick Dony, Andrew Nicholas, Dewi Binti Tajuddin

Abstract:

Starting and surviving in business is influenced by various socio-economic entrepreneurship activities. The study revealed that some entrepreneurs are not registered as SMEs but run their own businesses as intermediaries with private organizations, operating as "self-employed." SMEs ("small and medium enterprises") contribute to growth in Malaysia. Entrepreneurial interest and intention therefore spur new production, expand employment opportunities, increase productivity, promote exports, stimulate innovation, and provide new avenues in the business marketplace. This study makes a unique contribution to the full understanding of the complex mechanisms through which entrepreneurship obstacles and education impact happiness and well-being in society. Moreover, the term "ethnic" refers to a classification of a large group of people whose customs imply ancestral, racial, national, tribal, religious, linguistic, and cultural origins; it is a social phenomenon [1]. Sabah's population of 2,389,494 shows the predominant ethnic group to be the Kadazan Dusun (18.4%), followed by the Bajau (17.3%) and Malays (15.3%). For 2010, immigrant population statistics reported 239,765 people, covering 4% of Sabah's population [2]. Sabah has numerous talented entrepreneurs. The business environment among the minority ethnics is shaped by competitive business sentiment. The literature on ethnic entrepreneurship recognizes two main types of entrepreneurship: middleman and enclave entrepreneurs. According to Adam Smith [3], there are evidently some principles of disposition to admire and maintain distinctions of business rank and status, which cause the most universal business sentiments. Due to credit barriers and competition, the minority ethnics are losing the business market, and since 2014 many illegal immigrants have been found to be using locals' permits to operate businesses in Malaysia [4]. The development of small business entrepreneurship among the minority ethnics in Sabah is evidenced by a variety of complex perceptions and differing concepts. Studies have also confirmed the effects of heterogeneity on group decision-making and thinking, caused partly by excessive preoccupation with maintaining cohesiveness, and that the presence of cultural diversity in groups should reduce its probability [5]. The researchers propose seven success determinants, particularly for gauging the involvement of the minority ethnics compared with that of the immigrants in Sabah. Although SMEs have always been considered the backbone of economic development, the minority ethnics are often categorized as the "second choice." The study showed that illegal immigrant entrepreneurs impose a burden on Sabahan social programs as well as on the prison, court, and healthcare systems. The tension between the need for cheap labor and the impulse to protect Malaysian workers, entrepreneurs, and taxpayers in Sabah is among the subjects discussed in this study. This can clearly be both an advantage and a disadvantage for Sabah's economic development.

Keywords: entrepreneurial firms, self-employed, immigrants, minority ethnic, economic impacts

Procedia PDF Downloads 384
13 A Triple Win: Linking Students, Academics, and External Organisations to Provide Real-World Learning Experiences with Real-World Benefits

Authors: Anne E. Goodenough

Abstract:

Students often learn best ‘on the job’ through holistic real-world projects. They need real-world experiences to make classroom learning applicable and to increase their employability. Academics typically value working on projects where new knowledge is created and have a genuine desire to help students engage with learning and develop new skills. They might also be under institutional pressure to enhance student engagement, retention, and satisfaction. External organizations, especially non-governmental bodies, charities, and small enterprises, often have fundamental and pressing questions but lack the manpower and academic expertise to answer them effectively. They might also be on the lookout for talented potential employees. This study examines ways in which these diverse requirements can be met simultaneously by creating three-way projects that provide excellent academic and real-world outcomes for all involved. It studied a range of innovative projects across the natural sciences (biology, ecology, physical geography) and social sciences (human geography, sociology, criminology, and community engagement) to establish how best to harness the potential of this powerful approach. Focal collaborations included: (1) development of practitioner-linked modules; (2) frameworks where students collected and analyzed data for link organizations in research methods modules; (3) placement-based internships and dissertations; and (4) immersive fieldwork projects in novel locations to allow students to engage first-hand with contemporary issues as diverse as rhino poaching in South Africa, segregation in Ireland, and gun crime in Florida. Although there was no ‘magic formula’ for success, the approach was found to work best when small projects were developed that were achievable in a short time frame, both to tie into modular curricula and to meet the immediacy expectations of many link organizations. Bigger projects were found to work well in some cases, especially when they were essentially a series of linked smaller projects, either running concurrently or successively with each building on previous work. Opportunities were maximized when there were tangible benefits to the link organization, as this generally increased the organization's investment in the project and motivated students too. Finding the right approach for a given project was key: it was vital to ensure that something that could work effectively as an independent research project for one student, for example, was not shoehorned into being a project for multiple students within a taught module. In general, students were very positive about collaboration projects. They identified benefits to confidence, time-keeping, and communication, as well as conveying their enthusiasm when their work was of benefit to the wider community. Several students have gone on to do further work with their link organization in a voluntary capacity or as paid staff, or have used the experiences to help them break into the ever-more competitive job market in other ways. Although this approach involves a substantial time investment, especially from academics, the benefits can be profound. The approach has strong potential to engage students, help retention, improve student satisfaction, and teach new skills; keep the knowledge of academics fresh and current; and provide valuable tangible benefits for link organizations: a real triple win.

Keywords: authentic learning, curriculum development, effective education, employability, higher education, innovative pedagogy, link organizations, student experience

Procedia PDF Downloads 198
12 The Underground Ecosystem of Credit Card Frauds

Authors: Abhinav Singh

Abstract:

Point of Sale (POS) malware has been stealing the limelight this year. It has been the elemental factor in some of the biggest breaches uncovered in the past couple of years, including:
• Target: a retail giant reported close to 40 million credit card records stolen.
• Home Depot: a home products retailer reported a breach of close to 50 million credit records.
• Kmart: a US retailer recently announced a breach of 800 thousand credit card details.
In 2014 alone, there were reports of over 15 major breaches of payment systems around the globe. Memory-scraping malware infecting point of sale devices has been the lethal weapon used in these attacks. This malware is capable of reading payment information from the payment device's memory before it is encrypted, and then sends the stolen details to its parent server. It can record all the critical payment information, such as the card number, security number, and owner, all delivered in raw format. This talk will cover what happens after these details have been sent to the malware authors. The entire ecosystem of credit card fraud can be broadly classified into three steps: purchase of raw details and dumps; converting them to plastic cash/cards; and shopping. The focus of this talk will be on these steps and how they form an organized network of cybercrime. The first step involves the buying and selling of the stolen details. The key points to emphasize are:
• How this raw information is sold in the underground market.
• The buyer and seller anatomy.
• Building your shopping cart and preferences.
• The importance of reputation and vouches.
• Customer support and replacements/refunds.
But the story does not end here: at this point the buyer only has the raw card information. How will this raw information be converted into plastic cash? Here enters the second part of this underground economy, wherein the raw details are converted into actual cards. There are well-organized services running underground that can convert these details into plastic cards, and we will discuss this technique in detail. At last, the final step involves shopping with the stolen cards. Cards generated from the stolen details can easily be used to swipe-and-pay for goods at different retail shops; usually these purchases are of expensive items with good resale value. Apart from using the cards at stores, there are underground services that deliver online orders to dummy addresses; once the package is received, it is forwarded to the original buyer, with charges based on the value of the item being delivered. The overall underground ecosystem of credit card fraud works in a bulletproof way and involves people working in close groups and making heavy profits. This is a brief summary of what I plan to present at the talk. I have done extensive research and collected a good deal of material to present as samples, including: a list of underground forums; credit card dumps; IRC chats among these groups; personal chats with big card sellers; and an inside view of these forum owners. The talk will conclude by shedding light on how these breaches are tracked during investigation: how credit card breaches are tracked down, and what steps financial institutions can take to build an incident response around them.
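
As background for the "raw details" stage, both scrapers and investigators must separate genuine card numbers from arbitrary digit runs found in memory; the standard public filter is the Luhn checksum. A minimal Python sketch follows (illustrative only; the sample numbers are well-known test values, not real cards).

def luhn_valid(pan: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in pan if d.isdigit()]
    checksum = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4111111111111111"))  # True: a well-known test number
print(luhn_valid("4111111111111112"))  # False: fails the checksum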

Keywords: POS malware, credit card frauds, enterprise security, underground ecosystem

Procedia PDF Downloads 408
11 Lessons Learned through a Bicultural Approach to Tsunami Education in Aotearoa New Zealand

Authors: Lucy H. Kaiser, Kate Boersen

Abstract:

Kura Kaupapa Māori (kura) and bilingual schools are primary schools in Aotearoa/New Zealand which operate fully or partially under Māori custom and have curricula developed to include Te Reo Māori and Tikanga Māori (Māori language and cultural practices). These schools were established to support Māori children and their families through reinforcing cultural identity by enabling Māori language and culture to flourish in the field of education. Māori kaupapa (values), Mātauranga Māori (Māori knowledge), and Te Reo are crucial considerations for the development of educational resources for kura, bilingual, and mainstream schools. The inclusion of hazard risk in education has become an important issue in New Zealand due to the vulnerability of communities to a plethora of different hazards. Māori have an extensive knowledge of their local area and the history of hazards, which is often not appropriately recognised within mainstream hazard education resources. Researchers from the Joint Centre for Disaster Research, Massey University, and East Coast LAB (Life at the Boundary) in Napier were funded to collaboratively develop a toolkit of tsunami risk reduction activities with schools located in Hawke’s Bay’s tsunami evacuation zones. A Māori-led bicultural approach to developing and running the education activities was taken, focusing on creating culturally and locally relevant materials for students and schools as well as giving students a proactive role in making their communities better prepared for a tsunami event. The community-based participatory research is Māori-centred, framed by qualitative and Kaupapa Māori research methodologies, and utilizes a range of data collection methods including interviews, focus groups, and surveys. Māori participants, stakeholders, and the researchers collaborated through the duration of the project to ensure the programme would align with the wider school curricula and kaupapa values. The education programme applied a tuakana/teina Māori teaching and learning approach, in which high-school-aged students (tuakana) developed tsunami preparedness activities to run with primary school students (teina). At the end of the education programme, high school students were asked to reflect on their participation, what they had learned, and what they had enjoyed during the activities. This paper draws on lessons learned throughout this research project. As an exemplar, retaining a bicultural and bilingual perspective resulted in a more inclusive project, as there was variability across the students’ levels of confidence using Te Reo and Māori knowledge and cultural frameworks. Providing a range of different learning and experiential activities, including waiata (Māori songs), pūrākau (traditional stories), and games, was important to ensure students had the opportunity to participate and contribute using a range of different approaches appropriate to their individual learning needs. Inclusion of teachers in facilitation also proved beneficial in assisting classroom behavioural management. Lessons were framed by the tikanga and kawa (protocols) of the school to maintain cultural safety for the researchers and the students. Finally, the tuakana/teina component of the education activities became the crux of the programme, demonstrating a path for rangatahi (youth) to support their whānau and communities through facilitating disaster preparedness, risk reduction, and resilience.

Keywords: school safety, indigenous, disaster preparedness, children, education, tsunami

Procedia PDF Downloads 103
10 A 2-D and 3-D Embroidered Textrode Testing Framework Adhering to ISO Standards

Authors: Komal K., Cleary F., Wells J S.G., Bennett L

Abstract:

Smart fabric garments enable various monitoring applications across sectors such as healthcare, sports and fitness, and the military. Healthcare smart garments monitoring EEG, EMG, and ECG rely on the use of electrodes (dry or wet). However, such electrodes, when used for long-term monitoring, can cause discomfort and skin irritation for the wearer because of their inflexible structure and weight. Ongoing research has been investigating textile-based electrodes (textrodes) as more comfortable and usable fabric-based electrodes capable of intuitive biopotential monitoring. Progress has been made in this space, but textrodes still face a critical design challenge in maintaining consistent skin contact, which directly impacts signal quality. Furthermore, there is a lack of an ISO-based testing framework to validate electrode designs and assess their ability to achieve enhanced performance, strength, usability, and durability. This study proposes the development and evaluation of an ISO-compliant testing framework for standard 2-D and advanced 3-D embroidered textrode designs with a unique structure intended to establish enhanced skin contact for the wearer. The testing framework leverages ISO standards: ISO 13934-1:2013 for tensile and zone-wise strength tests, ISO 13937-2 for tear tests, and ISO 6330 for washing, validating textrode performance, a necessity for wearable health parameter monitoring applications. Five textrodes (C1-C5) were designed using EPC win digitization software. Varying patterns such as running stitches, lock stitches, back-to-back stitches, and moss stitches were used to create the embroidered textrode samples using Madeira HC12 conductive thread with a resistivity of 100 ohm/m. The textrode designs were then fabricated using a ZSK technical embroidery machine. A comparative analysis was conducted based on a series of laboratory tests adhering to ISO compliance requirements. Strain-focused tests applied to the textrodes included: (1) analysis of each electrode's overall surface-area strength; (2) assessment of the robustness of the textrode boundaries; and (3) the assignment of fault test zones to each textrode, where vertical and horizontal slits of 3 mm were applied to evaluate textrode performance and durability. Specific ISO-compliant washing tests were conducted multiple times on each textrode sample to assess both mechanical and chemical damage. Additionally, abrasion and pilling tests were performed to evaluate mechanical damage on the surface of the textrodes and to compare it with the washing tests. Finally, the textrodes were assessed based on morphological and surface resistance changes. Results demonstrate that textrode C4, featuring a 3-D layered structure consisting of foam, fabric, and conductive thread layers, significantly enhances skin-electrode contact for biopotential recording. The inclusion of a 3-D foam layer was particularly effective in maintaining the shape of the electrode during strain tests, making it the top-performing textrode sample. The layered 3-D design of textrode C4 therefore ranks highest when tested for durability, reusability, and washability. The ISO testing framework established in this study will support future research, validating the durability and reliability of textrodes for a wide range of applications.
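
One way the morphological and surface-resistance assessment can be summarized is as relative resistance drift across ISO 6330 wash cycles. The short Python sketch below uses hypothetical values purely for illustration; it is not the study's measured data.

resistance_ohm = {  # textrode id -> resistance after 0, 5, 10 wash cycles
    "C1": [120.0, 155.0, 210.0],
    "C4": [118.0, 126.0, 133.0],  # 3-D layered design, assumed most stable
}

for textrode, r in resistance_ohm.items():
    drift = 100.0 * (r[-1] - r[0]) / r[0]  # percent increase vs. unwashed
    print(f"{textrode}: {drift:.1f}% resistance increase after washing")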

Keywords: smart fabric, textrodes, testing framework, ISO compliant

Procedia PDF Downloads 32
9 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions to approximate the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases with on the order of billions of solution points. Running big simulations requires a considerable amount of RAM; therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O for Lustre, in such a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node a single MPI rank reads data from a file generated specifically for that node and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is suited to GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for calculating the solution at every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed scalable and efficient performance of the code in parallel computing.
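
The two read strategies can be sketched with mpi4py. The snippet below is illustrative rather than the DSEM code itself: it assumes a file laid out as equal, fixed-size int64 blocks per rank, and the file names are hypothetical.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
count = 1024  # integers owned by each rank (assumed layout)

# Strategy 1 (Lustre): collective parallel MPI I/O; each rank reads its slice.
fh = MPI.File.Open(comm, "prefile.bin", MPI.MODE_RDONLY)
mine = np.empty(count, dtype=np.int64)
fh.Read_at_all(rank * mine.nbytes, mine)  # byte offset, collective read
fh.Close()

# Strategy 2 (GPFS): one reader per node, then non-blocking point-to-point
# fan-out so traffic stays inside the node and never crosses the switches.
node = comm.Split_type(MPI.COMM_TYPE_SHARED)  # ranks sharing this node
if node.Get_rank() == 0:
    data = np.fromfile("prefile_node.bin", dtype=np.int64)  # per-node file
    chunks = np.split(data, node.Get_size())  # assumes equal division
    reqs = [node.Isend(chunks[r], dest=r) for r in range(1, node.Get_size())]
    mine2 = chunks[0]
    MPI.Request.Waitall(reqs)
else:
    mine2 = np.empty(count, dtype=np.int64)
    node.Recv(mine2, source=0)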

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 106
8 Basic Characteristics of Synchronized Stir Welding and Its Prospects

Authors: Ipei Sato, Naonori Shibata, Shoji Matsumoto, Naruhito Matsumoto

Abstract:

Friction stir welding (FSW) has been widely used in the automotive, aerospace, and high-tech industries due to the superior mechanical properties of its joints. To achieve a good-quality joint by FSW, it is necessary to secure an advance (tilt) angle, usually 3 to 5 degrees, using a dedicated FSW machine, and to join on a highly rigid machine. Recently, a new combined machine that adds an FSW function to the cutting function of a conventional machining center has appeared on the market, but its joining process window is small, so joining defects occur easily, and it lacks reproducibility. This has limited the application of FSW in the automotive industry, where control accuracy is required. FSW-only machines and hybrid equipment combining FSW and cutting machines both require high capital investment, which is one reason why FSW itself has not penetrated the market. Synchronized stir welding (SSW), a next-generation joining technology developed by our company, requires no tilt angle and no complicated spindle mechanism, and it minimizes the load and vibration on the spindle, the temperature during joining, and the shoulder diameter, thereby enabling a wide range of joining conditions and high-strength, high-speed joining with no joining defects; it is also a very cost-effective welding method. In synchronized stir welding, the tip of the joining tool is "driven by microwaves" in both the rotational and vertical directions of the tool: the tool is synchronized and stirred in the direction, and at the speed, required by the material to be welded, enabling welding that exceeds conventional concepts. Conventional FSW is passively stirred by an external driving force, resulting in low joining speeds and high heat input due to the need for a large shoulder diameter. In contrast, SSW actively stirs the materials in synchronization with the direction and speed in which they are to be stirred, resulting in a high joining speed and a small shoulder diameter, which allows joining to be completed with low heat input. The advantages of synchronized stir welding in terms of basic mechanical properties were evaluated by comparing the strength of the joint cross-section for FSW and SSW. Tensile strength was 217 MPa after FSW and 225 MPa after SSW, i.e., 89% and 93% of the 242 MPa base metal, respectively. Vickers hardness at the weld center was 57.5 HV after FSW (76% of the 75.0 HV base metal) and 66.0 HV after SSW (88%), showing excellent results. In the tensile tests, the material was aluminum (A5052-H112) plate 5 mm thick, and the specimens were dumbbell-shaped, 2 mm thick, 4 mm wide, and 60 mm long; measurements were made at a loading speed of 20%/min (in accordance with JIS Z 2241:2022) on an INSTRON 5982 tensile testing machine (INSTRON Japan). Vickers hardness was measured on a 5 mm thick specimen of A5052 tempered H112, 15 mm wide, at 0.3 pitch (in accordance with JIS Z 2244:2020) using a FUTURE-TECH FM-300 Vickers tester.
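
For clarity, the quoted percentages are joint efficiencies, i.e., joint strength normalized by base-metal strength; written out in LaTeX from the values above:

\eta_{\mathrm{FSW}} = \frac{\sigma_{\mathrm{FSW}}}{\sigma_{\mathrm{base}}} = \frac{217\,\mathrm{MPa}}{242\,\mathrm{MPa}} \approx 0.897, \qquad
\eta_{\mathrm{SSW}} = \frac{225\,\mathrm{MPa}}{242\,\mathrm{MPa}} \approx 0.930

The hardness ratios follow the same pattern: 57.5/75.0 ≈ 0.767 for FSW and 66.0/75.0 = 0.880 for SSW at the weld center.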

Keywords: FSW, SSW, synchronized stir welding, requires no tilt angles, running peak temperature less than 100 degrees C

Procedia PDF Downloads 22
7 Self-Medication with Antibiotics, Evidence of Factors Influencing the Practice in Low and Middle-Income Countries: A Systematic Scoping Review

Authors: Neusa Fernanda Torres, Buyisile Chibi, Lyn E. Middleton, Vernon P. Solomon, Tivani P. Mashamba-Thompson

Abstract:

Background: Self-medication with antibiotics (SMA) is a global concern, with a higher incidence in low- and middle-income countries (LMICs). Despite intense worldwide efforts to control antibiotics and promote their rational use, continuing SMA practices systematically expose individuals and communities to the risk of antibiotic resistance and other undesirable antibiotic side effects. Moreover, SMA increases health system costs through the acquisition of more powerful antibiotics to treat resistant infections. This review thus maps evidence on the factors influencing self-medication with antibiotics in these settings. Methods: The search strategy for this review involved electronic databases including PubMed, Web of Knowledge, Science Direct, EBSCOhost (PubMed, CINAHL with Full Text, Health Source - Consumer Edition, MEDLINE), Google Scholar, BioMed Central, and the World Health Organization library, using the search terms 'self-medication', 'antibiotics', 'factors', and 'reasons'. Our search included studies published from 2007 to 2017. Thematic analysis was performed to identify the patterns of evidence on SMA in LMICs. The mixed methods appraisal tool (MMAT) version 2011 was employed to assess the quality of the included primary studies. Results: Fifteen studies met the inclusion criteria. Studies included populations from rural (46.4%), urban (33.6%), and combined (20%) settings of the following LMICs: Guatemala (2 studies), India (2), Indonesia (2), Kenya (1), Laos (1), Nepal (1), Nigeria (2), Pakistan (2), Sri Lanka (1), and Yemen (1). The total sample size of all 15 included studies was 7676 participants. The findings of the review show a high prevalence of SMA, ranging from 8.1% to 93%. Accessibility, affordability, and the condition of health facilities (long waits, quality of services and workers), as well as poor health-seeking behavior and lack of information, are factors that influence SMA in LMICs. Antibiotics such as amoxicillin, metronidazole, amoxicillin/clavulanic acid, ampicillin, ciprofloxacin, azithromycin, penicillin, and tetracycline were the most frequently used for SMA. The major sources of antibiotics included pharmacies, drug stores, leftover drugs, family/friends, and old prescriptions. Sore throat, common cold, cough with mucus, headache, toothache, flu-like symptoms, pain relief, fever, runny nose, upper respiratory tract infections, urinary symptoms, and urinary tract infections were the common disease symptoms managed with SMA. Conclusion: Although the information on factors influencing SMA in LMICs is unevenly distributed, the available information revealed the existence of research evidence on antibiotic self-medication in some LMICs. SMA practices are influenced by socio-cultural determinants of health and are frequently associated with poor dispensing and prescribing practices, deficient health-seeking behavior, and consequently with inappropriate drug use. Therefore, there is still a need for further studies (qualitative, quantitative, and randomized controlled trials) on the factors and reasons for SMA to correctly address this public health problem in LMICs.

Keywords: antibiotics, factors, reasons, self-medication, low and middle-income countries (LMICs)

Procedia PDF Downloads 185
6 Computational, Human, and Material Modalities: An Augmented Reality Workflow for Building Form-Found Textile Structures

Authors: James Forren

Abstract:

This research paper details a recent demonstrator project in which digitally form-found textile structures were built by human craftspersons wearing augmented reality (AR) head-worn displays (HWDs). The project utilized a wet-state natural fiber/cementitious matrix composite to generate minimal-bending shapes in tension which, when cured and rotated, performed as minimal-bending compression members. The significance of the project is that it synthesizes computational structural simulations with visually guided handcraft production. Computational and physical form-finding methods with textiles are well characterized in the development of architectural form. One difficulty, however, is physically building computer simulations, which often requires complicated digital fabrication workflows. AR HWDs, by contrast, have been used to build complex digital forms from bricks, wood, plastic, and steel without digital fabrication devices; such projects draw instead on the tacit-knowledge motor schemas of the human craftsperson. Computational simulations offer unprecedented speed and performance in solving complex structural problems. Human craftspersons possess highly efficient motor schemas for complex spatial reasoning. And textiles offer efficient form-generating possibilities for individual structural members and overall structural forms. This project proposes that the synthesis of these three modalities of structural problem-solving (computational, human, and material) may not only develop efficient structural form but also offer further creative potential when the respective intelligence of each modality is productively leveraged. The project methodology pertains to its three modalities of production: (1) computational, (2) human, and (3) material. A proprietary three-dimensional graphic statics simulator generated a three-legged arch as a wireframe model. This wireframe was discretized into nine modules, three per leg. Each module was modeled as a woven matrix of one-inch diameter chords, and each woven matrix was transmitted to a holographic engine running on HWDs. Craftspersons wearing the HWDs then wove wet cementitious chords within a simple falsework frame to match the minimal-bending form displayed in front of them. Once the woven components cured, they were demounted from the frame. The components were then assembled into a full structure using the holographically displayed computational model as a guide. The assembled structure was approximately eighteen feet in diameter and ten feet in height and matched the holographic model to under an inch of tolerance. The construction validated the computational simulation of the minimal-bending form, as it was dimensionally stable for a ten-day period, after which it was disassembled. The demonstrator illustrated the facility with which a computationally derived, structurally stable form could be achieved through the holographically guided, complex three-dimensional motor schema of the human craftsperson. However, the workflow traveled unidirectionally from computer to human to material, failing to fully leverage the intelligence of each modality. Subsequent research (a workshop testing human interaction with a physics-engine simulation of string networks, and work on using HWDs to capture hand gestures in weaving) seeks to develop further interactivity with rope and chord toward a bi-directional workflow within full-scale building environments.
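
For readers unfamiliar with computational form finding, the equilibrium solve at the heart of such tools can be illustrated with the classic force density method (Schek). The Python sketch below is a generic toy example (a single free node hanging from four cables), not the project's proprietary three-dimensional graphic statics simulator.

import numpy as np

# Edges as (node_i, node_j); node 0 is free, nodes 1-4 are fixed supports.
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
n_nodes, free, fixed = 5, [0], [1, 2, 3, 4]

# Branch-node incidence matrix C (one row per edge: +1 at i, -1 at j).
C = np.zeros((len(edges), n_nodes))
for e, (i, j) in enumerate(edges):
    C[e, i], C[e, j] = 1.0, -1.0

q = np.array([1.0, 1.0, 1.0, 1.0])   # force densities (force per length)
Q = np.diag(q)
Cn, Cf = C[:, free], C[:, fixed]     # split free vs. fixed columns

xf = np.array([[1, 1, 0], [-1, 1, 0], [-1, -1, 0], [1, -1, 0]], float)
p = np.array([[0.0, 0.0, -1.0]])     # external load on the free node

# Equilibrium: (Cn^T Q Cn) xn = p - Cn^T Q Cf xf  -> solve for free nodes.
Dn = Cn.T @ Q @ Cn
xn = np.linalg.solve(Dn, p - Cn.T @ Q @ Cf @ xf)
print(xn)  # free node settles at [[0., 0., -0.25]] under the load

Varying the force densities q reshapes the equilibrium form, which is the basic dial a form-finding tool exposes to the designer.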

Keywords: augmented reality, cementitious composites, computational form finding, textile structures

Procedia PDF Downloads 138
5 Mobi-DiQ: A Pervasive Sensing System for Delirium Risk Assessment in Intensive Care Unit

Authors: Subhash Nerella, Ziyuan Guan, Azra Bihorac, Parisa Rashidi

Abstract:

Intensive care units (ICUs) provide care to critically ill patients in severe and life-threatening conditions. However, patient monitoring in the ICU is limited by the time and resource constraints imposed on healthcare providers. Many critical care indices, such as mobility, are still manually assessed, which can be subjective, prone to human error, and lacking in granularity. Other important aspects, such as environmental factors, are not monitored at all. For example, critically ill patients often experience circadian disruptions due to the absence of effective environmental “timekeepers” such as the light/dark cycle and the systemic effect of acute illness on chronobiologic markers. Although the occurrence of delirium is associated with circadian disruption risk factors, these factors are not routinely monitored in the ICU. Hence, there is a critical unmet need for systems that provide precise, real-time assessment through novel enabling technologies. We have developed the mobility and circadian disruption quantification system (Mobi-DiQ) by augmenting biomarker and clinical data with pervasive sensing data to generate cues related to mobility, nightly disruptions, and light and noise exposure. We hypothesize that Mobi-DiQ can provide accurate mobility and circadian cues that correlate with bedside clinical mobility assessments and circadian biomarkers, ultimately important for delirium risk assessment and prevention. The collected multimodal dataset consists of depth images, electromyography (EMG) data, patient extremity movement captured by accelerometers, ambient light levels, sound pressure level (SPL), and indoor air quality measured by volatile organic compounds and the equivalent CO₂ concentration. For delirium risk assessment, the system recognizes mobility cues (axial body movement features and body key points) and circadian cues, including nightly disruptions, ambient SPL, and light intensity, as well as other environmental factors such as indoor air quality. The Mobi-DiQ system consists of three major components: the pervasive sensing system, a data storage and analysis server, and a data annotation system. For data collection, six local pervasive sensing systems were deployed, each including a local computer and sensors. A video recording tool with a graphical user interface (GUI), developed in Python, was used to capture depth image frames for analyzing patient mobility. All sensor data is encrypted and then automatically uploaded to the Mobi-DiQ server through a secured VPN connection. Several data pipelines automate the data transfer, curation, and preparation for annotation and model training; data curation and post-processing are performed on the server. A custom secure annotation tool with a GUI was developed to annotate the depth activity data. The annotation tool is linked to a MongoDB database to record annotations and provide summaries. Docker containers are also utilized to manage services and pipelines running on the server in an isolated manner. The processed clinical data and annotations are used to train and develop real-time pervasive sensing systems to augment clinical decision-making and promote targeted interventions. In the future, we intend to evaluate our system in a clinical implementation trial, as well as to refine and validate it using other data sources, including neurological data obtained through continuous electroencephalography (EEG).
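
A minimal sketch of how an annotation record might be persisted and summarized in MongoDB follows; the collection, field names, and labels are hypothetical stand-ins, not the actual Mobi-DiQ schema.

from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
annotations = client["mobidiq"]["annotations"]

record = {
    "frame_id": "depth_000123",       # depth image frame being labeled
    "label": "axial_body_movement",   # mobility cue assigned by the annotator
    "annotator": "rater_01",
    "created_at": datetime.now(timezone.utc),
}
annotations.insert_one(record)

# Summarization: count annotations per label for a quick QA overview.
for row in annotations.aggregate([{"$group": {"_id": "$label",
                                              "n": {"$sum": 1}}}]):
    print(row["_id"], row["n"])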

Keywords: deep learning, delirium, healthcare, pervasive sensing

Procedia PDF Downloads 65
4 Geomechanics Properties of Tuzluca (Eastern Turkey) Bedded Rock Salt and Geotechnical Safety

Authors: Mehmet Salih Bayraktutan

Abstract:

Geomechanical properties of the rock salt deposits in the Tuzluca Salt Mine area (Eastern Turkey) are studied to model the operation-excavation strategy. The research focuses on calculating the critical span height that will meet the safety requirements. The mine site, Tuzluca Hills, consists of alternating parallel beds of salt (NaCl) and gypsum (CaSO₄·2H₂O). The rock salt beds are more resistant than the narrow gypsum interlayers and form almost 97 percent of the total height of the hill; therefore, the geotechnical safety of the galleries depends on the mechanical criteria of the rock salt cores. The deposition of the Tuzluca Basin was finally completed by the Tuzluca Evaporites as the uppermost stratigraphic unit. Mining operations are currently performed by classic mechanical excavation using the room-and-pillar method, and the rooms and pillars are experiencing an initial stage of fracturing in places. The geotechnical safety of the whole mining area is evaluated by Rock Mass Rating (RMR), Rock Quality Designation (RQD), the spacing of joints, and the interaction of groundwater with the fracture system. In general, bedded rock salt shows a large lateral deformation capacity while the deformation modulus stays relatively small (here E = 9.86 GPa). In such litho-stratigraphic environments, creep is a critical mechanism in failure. The steady-state creep rate of rock salt is greater than that of the interbedded layers. Under long-lasting compressive stresses, creep may cause shear displacements, partly along bedding planes, and steady-state creep eventually passes into an accelerated stage. Uniaxial compression creep tests on specimens were performed to gauge the rock salt strength. To give an idea, on rock salt cores the average axial strength and strain were found to be 18-24 MPa and 0.43-0.45%, respectively, with uniaxial compressive strengths of 26-32 MPa from bedded rock salt cores. The elastic modulus is comparatively low, but the lateral deformation of the rock salt is high under a uniaxial compression stress state: Poisson's ratio ν = 0.44, break load = 156 kN, cohesion c = 12.8 kg/cm², specific gravity SG = 2.17 g/cm³. The fracture system (the spacing of fractures, joints, faults, and offsets) is evaluated under the acting geodynamic mechanism. Two sand beds, each 4-6 m thick, exist near the upper level and at the top of the evaporite sequence. They act as aquifers and keep infiltrated water on top for long durations, which may result in the failure of roofs or pillars. Two major active seismic fault planes (striking N30W and N70E) and parallel fracture strands pose a moderate, seismically triggered risk of structural deformation of the rock salt bedding sequence. Earthquakes and floods are the two prevailing sources of geohazards in this region; the seismotectonic activity of the mine site is based on the crossing framework of the Kagizman and Igdir Faults. Dominant hazard risk sources include: (a) the weak mechanical properties of the rock salt, gypsum, and anhydrite beds (creep); (b) physical discontinuities cutting across the thick parallel layers of the evaporite mass; and (c) intercalated beds of weakly cemented or loose sand and clayey-sandy sediments. On the other hand, the absorbing effect of the parallel-bedded salt-gypsum deposits on seismic wave amplitudes reduces shaking within the rock mass.
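
For context, steady-state creep of rock salt is commonly described by a Norton-type power law; a standard textbook form (not fitted here to the Tuzluca cores) is

\dot{\varepsilon}_{\mathrm{ss}} = A\,\sigma^{n} \exp\!\left(-\frac{Q}{RT}\right)

where A is a material constant, \sigma the applied deviatoric stress, n the stress exponent, Q the activation energy, R the universal gas constant, and T the absolute temperature. The strong stress and temperature dependence of this law is what makes span height and excavation sequencing so critical in bedded salt.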

Keywords: bedded rock salt, creep, failure mechanism, geotechnical safety

Procedia PDF Downloads 167
3 Flood Risk Management in the Semi-Arid Regions of Lebanon - Case Study “Semi Arid Catchments, Ras Baalbeck and Fekha”

Authors: Essam Gooda, Chadi Abdallah, Hamdi Seif, Safaa Baydoun, Rouya Hdeib, Hilal Obeid

Abstract:

Floods are a common natural disaster in the semi-arid regions of Lebanon, causing damage to human life and deterioration of the environment. Despite their destructive nature and immense impact on the socio-economy of the region, flash floods have not received adequate attention from policy and decision makers. This is mainly because of a poor understanding of the processes involved and the measures needed to manage the problem. The current understanding of flash floods remains at the level of general concepts; most policy makers have yet to recognize that flash floods are distinctly different from normal riverine floods in terms of causes, propagation, intensity, impacts, predictability, and management. Flash floods are generally not investigated as a separate class of event but are rather reported as part of the overall seasonal flood situation. As a result, Lebanon generally lacks policies, strategies, and plans relating specifically to flash floods. The main objective of this research is to improve flash flood prediction by providing new knowledge and a better understanding of the hydrological processes governing flash floods in the eastern catchments of the El Assi River. This includes developing rainstorm time distribution curves that are unique to this type of study region, and analyzing, investigating, and developing a relationship between arid watershed characteristics (including urbanization) and the flood frequency experienced by nearby villages in Ras Baalbeck and Fekha. This paper discusses different levels of integration approaches between GIS and hydrological models (HEC-HMS and HEC-RAS) and presents a case study in which all the tasks of creating model input, editing data, running the model, and displaying output results are handled within a GIS environment. The study area corresponds to the East Basin (Ras Baalbeck and Fekha), comprising nearly 350 km² and situated in the Bekaa Valley of Lebanon. The case study draws on a database derived from Lebanese Army topographic maps of the region: ArcMap was used to digitize the contour lines, streams, and other features from the topographic maps, and a digital elevation model (DEM) grid was derived for the study area. The next steps in this research are to incorporate rainfall time series data from the Arseal, Fekha, and Deir El Ahmar stations to build a hydrologic data model within a GIS environment and to combine ArcGIS/ArcMap, HEC-HMS, and HEC-RAS models in order to produce a spatial-temporal model for floodplain analysis at a regional scale. In this study, HEC-HMS and the SCS method were chosen to build the hydrologic model of the watershed. The model was then calibrated using the flood event of 7-9 May 2014, considered exceptionally extreme because of the length of time the flows lasted (15 hours) and the fact that it covered both the Aarsal and Ras Baalbeck watersheds; the strongest previously reported flood in recent times lasted for only 7 hours and covered only one watershed. The calibrated hydrologic model was then used to build the hydraulic model and to derive flood hazard maps for the region. HEC-RAS was used for this purpose, and field trips to the catchments were carried out to calibrate both the hydrologic and hydraulic models. The presented models are flexible procedures for an ungauged watershed: for some storm events they deliver good results, while for others no parameter vectors can be found. To arrive at a general methodology based on these ideas, further calibration and reconciliation of the results' dependence on flood event parameters and catchment properties are required.
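
The SCS loss model mentioned above reduces to two standard curve-number equations, S = 25400/CN - 254 (mm) and Q = (P - Ia)^2 / (P - Ia + S). A minimal Python sketch follows; the curve number below is an assumed placeholder, not a calibrated value from this study.

def scs_runoff(precip_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth (mm) from storm precipitation via the SCS-CN method."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = ia_ratio * s          # initial abstraction before runoff begins
    if precip_mm <= ia:
        return 0.0             # all rainfall abstracted, no runoff
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# Example: a 60 mm storm over a semi-arid catchment with an assumed CN of 85.
print(round(scs_runoff(60.0, 85.0), 1))  # runoff depth in mm (about 27 mm)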

Keywords: flood risk management, flash flood, semi-arid region, El Assi River, hazard maps

Procedia PDF Downloads 457
2 The Integration of Digital Humanities into the Sociology of Knowledge Approach to Discourse Analysis

Authors: Gertraud Koch, Teresa Stumpf, Alejandra Tijerina García

Abstract:

Discourse analysis research approaches belong to the central research strategies applied throughout the humanities; they focus on the countless forms and ways digital texts and images shape present-day notions of the world. Despite the constantly growing number of relevant digital, multimodal discourse resources, digital humanities (DH) methods are thus far not systematically developed and accessible for discourse analysis approaches. Specifically, the significance of multimodality and meaning plurality modelling are yet to be sufficiently addressed. In order to address this research gap, the D-WISE project aims to develop a prototypical working environment as digital support for the sociology of knowledge approach to discourse analysis and new IT-analysis approaches for the use of context-oriented embedding representations. Playing an essential role throughout our research endeavor is the constant optimization of hermeneutical methodology in the use of (semi)automated processes and their corresponding epistemological reflection. Among the discourse analyses, the sociology of knowledge approach to discourse analysis is characterised by the reconstructive and accompanying research into the formation of knowledge systems in social negotiation processes. The approach analyses how dominant understandings of a phenomenon develop, i.e., the way they are expressed and consolidated by various actors in specific arenas of discourse until a specific understanding of the phenomenon and its socially accepted structure are established. This article presents insights and initial findings from D-WISE, a joint research project running since 2021 between the Institute of Anthropological Studies in Culture and History and the Language Technology Group of the Department of Informatics at the University of Hamburg. As an interdisciplinary team, we develop central innovations with regard to the availability of relevant DH applications by building up a uniform working environment, which supports the procedure of the sociology of knowledge approach to discourse analysis within open corpora and heterogeneous, multimodal data sources for researchers in the humanities. We are hereby expanding the existing range of DH methods by developing contextualized embeddings for improved modelling of the plurality of meaning and the integrated processing of multimodal data. The alignment of this methodological and technical innovation is based on the epistemological working methods according to grounded theory as a hermeneutic methodology. In order to systematically relate, compare, and reflect the approaches of structural-IT and hermeneutic-interpretative analysis, the discourse analysis is carried out both manually and digitally. Using the example of current discourses on digitization in the healthcare sector and the associated issues regarding data protection, we have manually built an initial data corpus of which the relevant actors and discourse positions are analysed in conventional qualitative discourse analysis. At the same time, we are building an extensive digital corpus on the same topic based on the use and further development of entity-centered research tools such as topic crawlers and automated newsreaders. In addition to the text material, this consists of multimodal sources such as images, video sequences, and apps. 
In a blended reading process, the data material is filtered, annotated, and finally coded with the help of NLP tools such as dependency parsing, named entity recognition, co-reference resolution, entity linking, sentiment analysis, and other project-specific tools that are being adapted and developed. The coding process is carried out (semi-)automatically by programs that propose coding paradigms based on the computed entities and their relationships. Simultaneously, these proposals can be specifically trained by manual coding in a closed reading process and specified according to the content issues. Overall, this approach enables purely qualitative, fully automated, and semi-automated analyses to be compared and reflected upon.
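
As a concrete illustration of one such NLP pass, the Python sketch below runs named entity recognition and dependency parsing with spaCy's small English model; this is a generic stand-in, since the D-WISE tools and contextualized-embedding models are project-specific.

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model package is installed
text = ("The health ministry proposed new data protection rules "
        "for digital patient records.")
doc = nlp(text)

# Named entity recognition: candidate discourse actors and arenas.
for ent in doc.ents:
    print(ent.text, ent.label_)

# Dependency parsing: subject/object relations as cues for coding proposals.
for tok in doc:
    if tok.dep_ in ("nsubj", "dobj"):
        print(tok.text, tok.dep_, "->", tok.head.text)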

Keywords: entanglement of structural IT and hermeneutic-interpretative analysis, multimodality, plurality of meaning, sociology of knowledge approach to discourse analysis

Procedia PDF Downloads 202
1 Detailed Degradation-Based Model for Solid Oxide Fuel Cells Long-Term Performance

Authors: Mina Naeini, Thomas A. Adams II

Abstract:

Solid oxide fuel cells (SOFCs) feature high electrical efficiency and generate substantial amounts of waste heat, which makes them suitable for integrated community energy systems (ICEs). By harvesting and distributing the waste heat through hot water pipelines, SOFCs can meet the thermal demand of communities; therefore, they can replace traditional gas boilers and reduce greenhouse gas (GHG) emissions. Despite these advantages over competing power generation units, the technology has not been successfully commercialized at large scale to replace traditional generators in ICEs. One reason is that SOFC performance deteriorates over long-term operation, which makes it difficult to find the proper sizing of the cells for a particular ICE system. To find the optimal sizing and operating conditions of SOFCs in a community, proper knowledge of the degradation mechanisms and of the effects of operating conditions on SOFCs' long-term performance is required. The simplified SOFC models in the current literature usually do not provide realistic results, since they tend to underestimate the rate of performance drop by making too many assumptions or generalizations. In addition, some of these models have been obtained from experimental data by curve-fitting methods; although such models are valid for the range of operating conditions in which the experiments were conducted, they cannot be generalized to other conditions and so have limited use for most ICEs. In the present study, a general, detailed degradation-based model is proposed that predicts the performance of conventional SOFCs over long periods of time at different operating conditions. Conventional SOFCs are composed of yttria-stabilized zirconia (YSZ) as the electrolyte, Ni-cermet anodes, and La₁₋ₓSrₓMnO₃ (LSM) cathodes. The following degradation processes are considered in the model: oxidation and coarsening of nickel particles in the Ni-cermet anodes, changes in the anode pore radius, degradation of the electrolyte and anode electrical conductivities, and sulfur poisoning of the anode compartment. The model helps decision makers discover the optimal sizing and operation of the cells for stable, efficient performance with the fewest assumptions, and it is suitable for a wide variety of applications. Sulfur contamination of the anode compartment is an important cause of performance drop in cells supplied with hydrocarbon-based fuel sources: H₂S, which is often added to hydrocarbon fuels as an odorant, can diminish the catalytic behavior of Ni-based anodes by lowering their electrochemical activity and hydrocarbon conversion properties. Therefore, the existing models in the literature for H₂-supplied SOFCs cannot be applied to hydrocarbon-fueled SOFCs, as they only account for the reduction in electrochemical activity. A regression model for sulfur contamination of SOFCs fed with hydrocarbon fuel sources is developed in the current work as a function of current density and the H₂S concentration in the fuel. To the best of the authors' knowledge, it is the first model that accounts for the impact of current density on the sulfur poisoning of cells supplied with hydrocarbon-based fuels. The proposed model has wide validity over a range of parameters and is consistent across multiple studies by different independent groups. Simulations using the degradation-based model illustrate that SOFC voltage drops significantly in the first 1500 hours of operation, after which the cells exhibit a slower degradation rate. The present analysis also reveals why such varied degradation rate values are reported in the literature for conventional SOFCs: the literature is inconsistent in how the degradation rate is defined and calculated. The degradation rate has typically been calculated as the slope of the voltage-versus-time plot, in units of percentage voltage drop per 1000 hours of operation. Because the voltage profile is nonlinear in time, the magnitude of this rate depends on the time window selected to calculate the curve's slope; to avoid this issue, the instantaneous rate of performance drop is used in the present work. According to a sensitivity analysis, current density has the highest impact on degradation rate among the operating factors, while temperature and hydrogen partial pressure affect SOFC performance less. The findings demonstrate that a cell running at lower current density performs better in the long term in terms of total average energy delivered per year, even though it initially generates less power than it would at a higher current density. This is because of the dominant and devastating impact of large current densities on the long-term performance of SOFCs, as explained by the model.
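
The window-dependence argument can be made concrete with a synthetic voltage profile: an average slope taken over the steep first 1500 hours yields a much larger "%/1000 h" figure than one taken over a later window, whereas the instantaneous rate avoids the window choice altogether. The Python sketch below uses an assumed curve shape, not the paper's degradation model.

import numpy as np

t = np.linspace(0.0, 10000.0, 2001)                    # operating hours
v = 0.85 - 0.04 * (1 - np.exp(-t / 500.0)) - 2e-6 * t  # cell voltage (V), synthetic

def avg_rate(t0: float, t1: float) -> float:
    """Average degradation rate over [t0, t1], in % of V(t0) per 1000 h."""
    v0, v1 = np.interp([t0, t1], t, v)
    return (v0 - v1) / v0 / (t1 - t0) * 1e5

print(avg_rate(0.0, 1500.0))      # steep early window -> roughly 3 %/1000 h
print(avg_rate(1500.0, 10000.0))  # later window -> roughly 0.3 %/1000 h

# Instantaneous rate at every time point removes the window dependence.
inst = -np.gradient(v, t) / v * 1e5  # %/1000 h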

Keywords: degradation rate, long-term performance, optimal operation, solid oxide fuel cells, SOFCs

Procedia PDF Downloads 108