Search results for: energy efficiency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12773

683 Re-Framing Resilience Turn in Risk and Management with Anti-Positivistic Perspective of Holling's Early Work

Authors: Jose Cañizares

Abstract:

In recent decades, resilience has received much attention in relation to understanding and managing new forms of risk, especially in the context of urban adaptation to climate change. There are abundant concerns, however, about how best to interpret resilience and related ideas, and about whether they can guide ethically appropriate risk-related or adaptation efforts. Narrative creation and framing are critical steps in shaping public discussion and policy in large-scale interventions, since they favor or inhibit early decision and interpretation habits, which can be morally sensitive and then persist over time. This article adds to this framing process by contesting a conventional narrative on resilience and offering an alternative one. Conventionally, present ideas on resilience are traced to the work of ecologist C. S. Holling, especially to his article Resilience and Stability of Ecological Systems. This article is usually portrayed as a contribution of complex systems thinking to theoretical ecology, in which Holling appeals to resilience in order to challenge received views on ecosystem stability and the diversity-stability hypothesis. In this regard, resilience is construed as a “purely scientific”, precise and descriptive concept, denoting a complex property that allows ecosystems to persist, or to maintain functions, after disturbance. Yet, these formal features of resilience supposedly changed with Holling’s later work in the 90s, where, it is argued, Holling began to use resilience as a more pragmatic “boundary term”, aimed at unifying transdisciplinary research about risks, ecological or otherwise, and at articulating public debate and governance strategies on the issue. In the conventional story, increased vagueness and degrees of normativity are the price to pay for this conceptual shift, which has made the term more widely usable, but also incompatible with scientific purposes and morally problematic (if not completely objectionable). This paper builds on a detailed analysis of Holling’s early work to propose an alternative narrative. The study shows that the “complexity turn” has often entangled theoretical and pragmatic aims. Accordingly, Holling’s primary aim was to fight what he termed “pathologies of natural resource management” or “pathologies of command and control management”, and so the terms of his reform of ecosystem science are partly subordinate to the details of his proposal for reforming the management sciences. As regards resilience, Holling used it as a polysemous, ambiguous and normative term: sometimes as an instrumental value that is closely related to various stability concepts; other times, and more crucially, as an intrinsic value and a tool for attacking efficiency and instrumentalism in management. This narrative reveals the limitations of its conventional alternative and has several practical advantages. It captures well the structure and purposes of Holling’s project and the various roles of resilience in it. It helps to link Holling’s early work with other philosophical and ideological shifts at work in the 70s. It highlights the currency of Holling’s early work for present research and action in fields such as risk and climate adaptation. And it draws attention to morally relevant aspects of resilience that the conventional narrative neglects.

Keywords: resilience, complexity turn, risk management, positivistic, framing

Procedia PDF Downloads 149
682 Water Dumpflood into Multiple Low-Pressure Gas Reservoirs

Authors: S. Lertsakulpasuk, S. Athichanagorn

Abstract:

As depletion-drive gas reservoirs are abandoned when the production rate becomes insufficient due to pressure depletion, waterflooding has been proposed to increase the reservoir pressure in order to prolong gas production. Due to its high cost, water injection may not be economically feasible. Water dumpflood into gas reservoirs is a promising new approach to increase gas recovery by maintaining reservoir pressure at a much lower cost than conventional waterflooding. Thus, a simulation study of water dumpflood into multiple nearly abandoned or already abandoned thin-bedded gas reservoirs commonly found in the Gulf of Thailand was conducted to demonstrate the advantage of the proposed method and to determine the most suitable operational parameters for reservoirs having different system parameters. A reservoir simulation model consisting of several thin-layered depletion-drive gas reservoirs and an overlying aquifer was constructed in order to investigate the performance of the proposed method. Two producers were initially used to produce gas from the reservoirs. One of them was later converted to a dumpflood well after the gas production rate started to decline due to continuous reduction in reservoir pressure. The dumpflood well was used to flow water from the aquifer to increase the pressure of the gas reservoirs in order to drive gas towards the producer. Two main operational parameters, the wellhead pressure of the producer and the time to start water dumpflood, were investigated to optimize gas recovery for various systems having different gas reservoir dip angles, well spacings, aquifer sizes, and aquifer depths. This simulation study found that water dumpflood can increase gas recovery by up to 12% of OGIP, depending on operational conditions and system parameters. For systems having a large aquifer and a large distance between wells, it is best to start water dumpflood while the gas rate is still high, since the long distance between the gas producer and the dumpflood well helps delay water breakthrough at the producer. As long as there is no early water breakthrough, the earlier the energy is supplied to the gas reservoirs, the better the gas recovery. On the other hand, for systems having a small or moderate aquifer size and a short distance between the two wells, performing water dumpflood when the rate is close to the economic rate is better, because water is more likely to cause an early breakthrough when the distance is short. Water dumpflood into multiple nearly depleted or depleted gas reservoirs is a novel subject of study. The idea of using water dumpflood to increase gas recovery has been mentioned in the literature but has never been investigated in detail. This detailed study will help a practicing engineer to understand the benefits of such a method and to implement it with minimum cost and risk.
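
The mechanism the study exploits can be illustrated with the standard depletion-drive gas material balance, p/z = (p_i/z_i)(1 - Gp/G): as gas is produced, p/z falls toward the abandonment pressure, and any pressure support defers that point. The sketch below is our illustration with hypothetical numbers, not the paper's simulation model.

```python
# Hedged sketch: depletion-drive gas material balance behind the idea that
# maintaining reservoir pressure prolongs production. All values hypothetical.

def recovery_factor(p_i, z_i, p_ab, z_ab):
    """Fraction of OGIP produced when p/z falls from p_i/z_i to p_ab/z_ab."""
    return 1.0 - (p_ab / z_ab) / (p_i / z_i)

# Hypothetical reservoir: 3000 psia initial, abandoned at 500 psia.
rf = recovery_factor(p_i=3000.0, z_i=0.90, p_ab=500.0, z_ab=0.95)
print(f"Depletion-drive recovery factor: {rf:.1%}")  # ~84% of OGIP

# Water influx from a dumpflood keeps p/z higher at a given cumulative
# production, so the economic rate limit is reached later, adding recovery
# (up to 12% of OGIP in the paper's simulations).
```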

Keywords: dumpflood, increase gas recovery, low-pressure gas reservoir, multiple gas reservoirs

Procedia PDF Downloads 427
681 Synthesis and Characterization of LiCoO2 Cathode Material by Sol-Gel Method

Authors: Nur Azilina Abdul Aziz, Tuti Katrina Abdullah, Ahmad Azmin Mohamad

Abstract:

Lithium-transition metal oxides such as LiCoO2, LiMn2O4, LiFePO4, and LiNiO2 have been used as cathode materials in high-performance lithium-ion rechargeable batteries. Among these cathode materials, LiCoO2 has the potential to be widely used in lithium-ion batteries because of its layered crystalline structure, good capacity, high cell voltage, high specific energy density, high power rate, low self-discharge, and excellent cycle life. This cathode material has been widely used in commercial lithium-ion batteries due to its low irreversible capacity loss and good cycling performance. However, there are several problems that interfere with the production of material with good electrochemical properties, including the crystallinity, the average particle size, and the particle size distribution. In recent years, the synthesis of nanoparticles has been intensively investigated. Powders prepared by the traditional solid-state reaction have a large particle size and broad size distribution. On the other hand, solution methods can reduce the particle size to the nanometer range and control the particle size distribution. In this study, LiCoO2 was synthesized using the sol-gel preparation method, in which lithium acetate and cobalt acetate were used as reactants. Stoichiometric amounts of the reactants were dissolved in deionized water. The solution was stirred for 30 hours using a magnetic stirrer, followed by heating at 80°C under vigorous stirring until a viscous gel was formed. The as-formed gel was calcined at 700°C for 7 h in a room atmosphere. The structure and morphology of the LiCoO2 were characterized using X-ray diffraction and scanning electron microscopy. The diffraction pattern of the material can be indexed based on the α-NaFeO2 structure. The clear splitting of the hexagonal doublets of (006)/(102) and (108)/(110) in the pattern indicates that the material formed a well-ordered hexagonal structure. No impurity phase can be seen in this range, probably due to the homogeneous mixing of the cations in the precursor. Furthermore, the SEM micrograph of the LiCoO2 shows that the particle size distribution is almost uniform, with particle sizes between 0.3 and 0.5 microns. In conclusion, LiCoO2 powder was successfully synthesized using the sol-gel method. The LiCoO2 showed a hexagonal crystal structure, and the prepared sample clearly indicates a pure phase of LiCoO2. Meanwhile, the morphology of the sample showed that the particle size and size distribution of the particles are almost uniform.

Keywords: cathode material, LiCoO2, lithium-ion rechargeable batteries, Sol-Gel method

Procedia PDF Downloads 357
680 Time Estimation of Return to Sports Based on Classification of Health Levels of Anterior Cruciate Ligament Using a Convolutional Neural Network after Reconstruction Surgery

Authors: Zeinab Jafari, Ali Sharifnezhad, Mohammad Razi, Mohammad Haghpanahi, Arash Maghsoudi

Abstract:

Background and Objective: Sports-related rupture of the anterior cruciate ligament (ACL) and the injuries that follow have been associated with various disorders, such as long-lasting changes in muscle activation patterns in athletes, which might persist after ACL reconstruction (ACLR). The rupture of the ACL might result in abnormal patterns of movement execution, extending the treatment period and delaying athletes’ return to sports (RTS). As ACL injury is especially prevalent among athletes, the lengthy treatment process and athletes’ absence from sports are of great concern to athletes and coaches. Thus, estimating a safe time of RTS is of crucial importance. Therefore, using a deep neural network (DNN) to classify the health levels of the ACL in injured athletes, this study aimed to estimate the safe time for athletes to return to competition. Methods: Ten athletes with ACLR and fourteen healthy controls participated in this study. Three health levels of the ACL were defined: healthy, six months post-ACLR surgery, and nine months post-ACLR surgery. Athletes with ACLR were tested six and nine months after the ACLR surgery. During the course of this study, surface electromyography (sEMG) signals were recorded from five knee muscles, namely the Rectus Femoris (RF), Vastus Lateralis (VL), Vastus Medialis (VM), Biceps Femoris (BF), and Semitendinosus (ST), during single-leg drop landing (SLDL) and single-leg forward hopping (SLFH) tasks. The Pseudo-Wigner-Ville distribution (PWVD) was used to produce three-dimensional (3-D) images of the energy distribution patterns of the sEMG signals. These 3-D images were then converted to two-dimensional (2-D) images using a heat-mapping technique and fed to a deep convolutional neural network (DCNN). Results: In this study, we estimated the safe time of RTS by designing a DCNN classifier with an accuracy of 90%, which could classify the ACL into three health levels. Discussion: The findings of this study demonstrate the potential of the DCNN classification technique using sEMG signals in estimating RTS time, which will assist in evaluating the recovery process after ACLR in athletes.
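
A minimal sketch of the final classification stage is given below; the layer sizes and the 64x64 input resolution are our assumptions for illustration, since the abstract does not specify the network architecture.

```python
# Hedged sketch of a DCNN for the three ACL health levels (healthy,
# six-month post-ACLR, nine-month post-ACLR). Architecture and input
# size (64x64 heat maps) are assumptions, not the paper's specification.
import torch
import torch.nn as nn

class ACLClassifier(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One dummy batch of 2-D sEMG energy heat maps (e.g., from the PWVD step).
model = ACLClassifier()
logits = model(torch.randn(8, 3, 64, 64))  # -> shape (8, 3)
```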

Keywords: anterior cruciate ligament reconstruction, return to sports, surface electromyography, deep convolutional neural network

Procedia PDF Downloads 56
679 Creatine Associated with Resistance Training Increases Muscle Mass in the Elderly

Authors: Camila Lemos Pinto, Juliana Alves Carneiro, Patrícia Borges Botelho, João Felipe Mota

Abstract:

Sarcopenia, a syndrome characterized by progressive and generalized loss of skeletal muscle mass and strength, currently affects over 50 million people and increases the risk of adverse outcomes such as physical disability, poor quality of life, and death. The aim of this study was to examine the efficacy of creatine supplementation associated with resistance training on muscle mass in the elderly. A 12-week, double-blind, randomized, parallel-group, placebo-controlled trial was conducted. Participants were randomly allocated into one of the following groups: placebo with resistance training (PL+RT, n=14) and creatine supplementation with resistance training (CR+RT, n=13). The subjects in the CR+RT group received 5 g/day of creatine monohydrate, and the subjects in the PL+RT group were given the same dose of maltodextrin. Participants were instructed to ingest the supplement, dissolved in a lemon-flavored beverage containing 100 g of maltodextrin, immediately after lunch on non-training days and immediately after resistance training sessions on training days. Participants of both groups undertook a supervised exercise training program for 12 weeks (3 times per week). The subjects were assessed at baseline and after 12 weeks. The primary outcome was muscle mass, assessed by dual-energy X-ray absorptiometry (DXA). The secondary outcome was the diagnosis of participants with one of the three stages of sarcopenia (presarcopenia, sarcopenia, and severe sarcopenia), based on the skeletal muscle mass index (SMI), handgrip strength, and gait speed. The CR+RT group had a significant increase in SMI and muscle mass (p<0.0001), a significant decrease in android and gynoid fat (p=0.028 and p=0.035, respectively), and a tendency towards decreased body fat (p=0.053) after the intervention. The PL+RT group only had a significant increase in SMI (p=0.007). The main finding of this clinical trial was that creatine supplementation combined with resistance training was capable of increasing muscle mass in our elderly cohort (p=0.02). In addition, the number of subjects diagnosed with one of the three stages of sarcopenia at baseline decreased in the creatine-supplemented group in comparison with the placebo group (CR+RT, n=-3; PL+RT, n=0). In summary, 12 weeks of creatine supplementation associated with resistance training resulted in increases in muscle mass. This is the first study in elderly people of both sexes to show such an increase in muscle mass with a smaller quantity of creatine supplementation over a short period. Future long-term research should investigate the effects of these interventions in sarcopenic elderly people.

Keywords: creatine, dietetic supplement, elderly, resistance training

Procedia PDF Downloads 460
678 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This situation of relative underdevelopment of this potentially very important range of the electromagnetic spectrum is known as the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled and continuously radiating terahertz radiation sources. Therefore, the development of new techniques serving this purpose, as well as of various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest suitable physical systems as the major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g. laser) field, can radiate continuously at a much lower (e.g. terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent non-equal diagonal matrix elements. This assumption contradicts the conventional assumption, routinely made in quantum optics, that only the off-diagonal matrix elements are non-zero. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as, for example, quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible routes to experimental observation and practical implementation of the predicted effect are discussed as well.
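
The underlying mechanism can be written compactly. In a hedged sketch (our notation, not necessarily the authors'), the semiclassical Hamiltonian of a driven two-level system with permanent dipole moments is

```latex
H = \frac{\hbar\omega_0}{2}\,\sigma_z - \hat{d}\,E_0\cos(\omega t),
\qquad
\hat{d} = d_{12}\,(\sigma_+ + \sigma_-)
        + \frac{d_{11}+d_{22}}{2}\,\mathbb{1}
        + \frac{d_{22}-d_{11}}{2}\,\sigma_z .
```

When d_{11} = d_{22} (the usual inversion-symmetric case), the last term vanishes and fluorescence appears only near the optical frequency; when d_{11} ≠ d_{22}, the diagonal coupling lets the population oscillation itself radiate, producing a low-frequency component on the order of the Rabi frequency, which can fall in the terahertz range for realistic driving strengths.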

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot

Procedia PDF Downloads 251
677 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
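
As one concrete example of the cost-optimization theme, a placement decision can weigh per-provider compute prices against inter-cloud egress charges. The sketch below is our illustration with hypothetical prices; it is not drawn from the paper.

```python
# Hedged sketch: pick a cloud for each job by minimizing compute cost plus
# data-egress cost. Provider names and prices are hypothetical.

PROVIDERS = {
    #            $/CPU-hour   $/GB egress to another cloud
    "cloud_a": {"cpu": 0.045, "egress": 0.09},
    "cloud_b": {"cpu": 0.040, "egress": 0.12},
    "cloud_c": {"cpu": 0.050, "egress": 0.05},
}

def place(cpu_hours: float, gb_to_move: float, data_home: str) -> str:
    """Cheapest provider for one job whose input data lives on data_home."""
    def total_cost(p: str) -> float:
        egress = 0.0 if p == data_home else gb_to_move * PROVIDERS[data_home]["egress"]
        return cpu_hours * PROVIDERS[p]["cpu"] + egress
    return min(PROVIDERS, key=total_cost)

# A 500 CPU-hour job over 2 TB of data currently stored on cloud_a:
print(place(500, 2000, "cloud_a"))  # staying near the data usually wins
```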

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 47
676 Local Governance Systems for Value Chains' Promotion: A Chance for Rural Development in Tunisia

Authors: Neil Fourati

Abstract:

Collaboration between public and private stakeholders for agricultural development is today lacking in Tunisia. The last dictatorship witnessed by the country eroded the trust between the state and small farmers that is necessary for the realization of development projects, in particular in the interior, disadvantaged regions of the country. These regions, where the youth unemployment rate is above 30%, were the heart of the uprising that preceded the revolution. The transitional period that the country has been going through since 2011 is an opportunity for the emergence of new governance systems in the context of decentralization. The latter is recognized in the constitution of the Second Tunisian Republic as the basis of regional management. Civil society participation in the decision-making process is considered a means of identifying measures that are more coherent with local populations’ needs. The development of agriculture and food value chains in rural areas is relevant within the framework of the implementation of new decision systems that require public-private collaboration. These new systems can lead to actions in favor of improving the living conditions of rural populations. The diversification of activities around agriculture can be a solution for job creation and local value creation. The project for the promotion of sustainable agriculture and rural development in Tunisia has designed and implemented a multi-stakeholder dialogue process for the development of local value chain platforms in disadvantaged areas of the country. The platforms gather public and private organizations, as well as civil society organizations, that intervene in a locality in relation to the production, processing, or commercialization of a product. The role of these platforms is to formulate, realize, and evaluate collaborative actions or projects for the promotion of the product and territory concerned. The steps of the dialogue process create the collaboration conditions necessary to promote viable communities, dynamic economies, and healthy environments. Effectively, the dialogue process steps make it possible to identify the local leaders. These leaders recognize the development constraints and opportunities. They deal with key, unifying subjects around the collaborative projects or actions. They take common decisions in order to create effective coalitions for the implementation of common actions. The platforms achieve quick successes so as to build trust. The project has supported the formulation of 22 collaborative projects, of which seven priority collaborative projects have been realized. Each collaborative project includes three parts: the signature of collaboration agreements between public and private organizations, investment in relevant equipment in order to increase productivity and the quality of local products, and, finally, management and technical training for producers’ organizations for the promotion of local products. The implementation of this process has enhanced the capacity for collaboration between local actors: producers, traders, processors, and support structures from the public sector and civil society. It has also improved the efficiency and relevance of actions and measures in agriculture and rural development programs. Thus, the process for the development of local value chain platforms is a basis for the sustainable development of agriculture.

Keywords: governance, public private collaboration, rural development, value chains

Procedia PDF Downloads 266
675 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics

Authors: Jingsi Li, Neil S. Ferguson

Abstract:

Time pressure can influence productivity, the quality of decision making, and the efficiency of problem solving. This insight has mostly stemmed from cognitive research and the psychological literature; there has, however, been scant discussion in transport-adjacent fields. It is conceivable that in many activity-travel contexts, time pressure is a potentially important factor, since an excessive amount of decision time may incur the risk of late arrival at the next activity. Activity-travel rescheduling behavior is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, social requirements, etc. This paper hypothesizes that an additional factor, perceived time pressure, could affect travelers’ rescheduling behavior, thus having an impact on travel demand management. Time pressure may arise in different ways and is assumed here to be essentially incurred by travelers planning their schedules without an expectation of unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, less computationally demanding, non-compensatory heuristic models are considered as an alternative to simulate travelers’ responses. The paper contributes to travel behavior modeling research by investigating the following questions: how can time pressure be measured properly in an activity-travel day-plan context? How do travelers reschedule their plans to cope with time pressure? How does the importance of the activity affect travelers’ rescheduling behavior? Which behavioral model best describes the process of making activity-travel rescheduling decisions? How do the identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach. Data on travelers’ activity-travel rescheduling behavior are collected via a web-based interactive survey in which a fictitious scenario is created comprising multiple uncertain events affecting the activity or the travel. The experiments are conducted in order to gain a realistic picture of activity-travel rescheduling, considering the factor of time pressure. The identified behavioral models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategy on the transport network. The results show that an increased proportion of travelers use simpler, non-compensatory choice strategies instead of compensatory methods to cope with time pressure. Specifically, satisficing - one of the heuristic decision-making strategies - is commonly adopted, since travelers tend to abandon the less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory decision-making heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting the effect of perceived time pressure may result in an inaccurate forecast of choice probability and overestimate responsiveness to policy changes.
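
The satisficing heuristic reported in the results can be made concrete with a toy example. The sketch below is our illustration, not the paper's estimated MHM: under a shrunken time budget, the traveler keeps activities in order of importance and abandons the rest.

```python
# Hedged sketch of a satisficing reschedule: keep the most important
# activities that still fit the remaining time budget. Data are hypothetical.

activities = [  # (name, importance, duration in minutes)
    ("work", 10, 480),
    ("pick up kids", 9, 40),
    ("gym", 4, 60),
    ("shopping", 3, 45),
]

def satisfice(plan, budget_min):
    kept, used = [], 0
    for name, importance, duration in sorted(plan, key=lambda a: -a[1]):
        if used + duration <= budget_min:
            kept.append(name)
            used += duration
    return kept

# A 90-minute disruption leaves only 540 minutes of usable time:
print(satisfice(activities, budget_min=540))  # -> ['work', 'pick up kids']
```

A compensatory utility-maximizing agent would instead enumerate feasible subsets and trade off all attributes, which is exactly the computational burden that time pressure discourages.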

Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management

Procedia PDF Downloads 96
674 Urban Heat Island Intensity Assessment through Comparative Study on Land Surface Temperature and Normalized Difference Vegetation Index: A Case Study of Chittagong, Bangladesh

Authors: Tausif A. Ishtiaque, Zarrin T. Tasin, Kazi S. Akter

Abstract:

The current trend of urban expansion, especially in developing countries, has caused significant changes in land cover, which is generating great concern due to widespread environmental degradation. The energy consumption of cities is also increasing with the aggravated heat island effect. The distribution of land surface temperature (LST) is one of the most significant climatic parameters affected by urban land cover change. The recent increasing trend of LST is producing an elevated temperature profile of built-up areas with less vegetative cover. Gradual change in land cover, especially the decrease in vegetative cover, is enhancing the urban heat island (UHI) effect in developing cities around the world. An increase in the amount of urban vegetation cover can be a useful solution for the reduction of UHI intensity. LST and the Normalized Difference Vegetation Index (NDVI) have been widely accepted as reliable indicators of UHI and vegetation abundance, respectively. Chittagong, the second largest city of Bangladesh, has been a growth center due to rapid urbanization over the last several decades. This study assesses the intensity of UHI in Chittagong city by analyzing the relationship between LST and NDVI based on the type of land use/land cover (LULC) in the study area, applying an integrated approach of Geographic Information Systems (GIS), remote sensing (RS), and regression analysis. A land cover map is prepared through an interactive supervised classification using remotely sensed data from a Landsat ETM+ image along with NDVI differencing using ArcGIS. LST and NDVI values are extracted from the same image. The regression analysis between LST and NDVI indicates that within the study area, UHI is directly correlated with LST and negatively correlated with NDVI. This implies that surface temperature decreases as vegetation cover increases, along with a reduction in UHI intensity. Moreover, there are noticeable differences in the relationship between LST and NDVI based on the type of LULC. In other words, depending on the type of land usage, an increase in vegetation cover has a varying impact on UHI intensity. This analysis will contribute to the formulation of sustainable urban land use planning decisions as well as suggesting suitable actions for the mitigation of UHI intensity within the study area.
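
The core computation is compact: NDVI = (NIR - Red)/(NIR + Red) from the Landsat bands, then an ordinary least-squares fit of LST on NDVI. The sketch below uses synthetic reflectances to illustrate the expected negative slope; it is not the study's data.

```python
# Hedged sketch: NDVI from red/near-infrared reflectance and an OLS fit of
# LST against NDVI. Values are synthetic, mimicking the paper's finding
# that greener pixels are cooler.
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.30, 1000)   # ETM+ band 3 reflectance (synthetic)
nir = rng.uniform(0.20, 0.60, 1000)   # ETM+ band 4 reflectance (synthetic)
ndvi = (nir - red) / (nir + red)

lst = 38.0 - 10.0 * ndvi + rng.normal(0.0, 1.0, ndvi.size)  # degrees C

slope, intercept = np.polyfit(ndvi, lst, 1)
print(f"LST = {slope:.2f} * NDVI + {intercept:.2f}")  # slope < 0 expected
```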

Keywords: land cover change, land surface temperature, normalized difference vegetation index, urban heat island

Procedia PDF Downloads 261
673 The Perceptions of Patients with Osteoarthritis at a Public Community Rehabilitation Centre in the Cape Metropole for Using Digital Technology in Rehabilitation

Authors: Gabriela Prins, Quinette Louw, Dawn Ernstzen

Abstract:

Background: Access to rehabilitation services is a major challenge globally, especially in low- and middle-income countries (LMICs) where resources and infrastructure are extremely limited. Telerehabilitation (TR) has emerged in recent decades as a highly promising method to dramatically expand accessibility to rehabilitation services globally. TR provides rehabilitation care remotely using communication technologies such as video conferencing, smartphones, and internet-connected devices. This boosts accessibility in underprivileged regions and allows greater flexibility for patients. Despite this, TR is hindered by several factors, including limited technological resources, high costs, lack of digital access, and the unavailability of healthcare systems, which are major barriers to widespread adoption among LMIC patients. These barriers have collectively hindered the implementation and adoption of TR services in LMIC healthcare settings. Adoption of TR will also require the buy-in of end users, and limited information is available on the perspectives of the South African population. Aim: The study aimed to understand patients' perspectives regarding the use of digital technology as part of their OA rehabilitation at a public community healthcare centre in the Cape Metropole area. Methods: A qualitative descriptive study design was used with 10 OA patients from a public community rehabilitation centre in South Africa. Data collection included semi-structured interviews and patient-reported outcome measures (PSFS, ASES-8, and EuroQol EQ-5D-5L) on functioning and quality of life. Transcribed interview data were coded in ATLAS.ti 22.2 and analyzed using thematic analysis. The results were narratively documented. Results: Four themes arose from the interviews: telerehabilitation awareness (use of digital technology, information sources, and prior experience with technology/TR), telerehabilitation benefits (access to healthcare providers, access to educational information, convenience, time and resource efficiency, and facilitating family involvement), telerehabilitation implementation considerations (openness towards TR implementation, learning about TR and technology, the therapeutic relationship, and privacy), and future use of telerehabilitation (personal preference and TR for the next generation). The ten participants demonstrated limited awareness of and exposure to TR, as well as minimal digital literacy and skills. They were skeptical when comparing the effectiveness of TR to in-person rehabilitation and valued physical interaction with health professionals. However, some recognized the potential benefits of TR for accessibility, convenience, family involvement, and improving community health in the long term. Participants were willing to try TR given sufficient training. Conclusion: With targeted efforts addressing the identified barriers around awareness, technological literacy, clinician readiness, and resource availability, perspectives on TR may shift positively from uncertainty towards endorsement of this expanding approach for simpler rehabilitation access in LMICs.

Keywords: digital technology, osteoarthritis, primary health care, telerehabilitation

Procedia PDF Downloads 55
672 Managing the Blue Economy and Responding to the Environmental Dimensions of a Transnational Governance Challenge

Authors: Ivy Chen XQ

Abstract:

This research places a much-needed focus on the conservation of the Blue Economy (BE) through the design and development of monitoring systems to track critical indicators of the status of the BE. In this process, local experiences provide insight into important community issues, as well as into the necessity to cooperate and collaborate in order to achieve sustainable options. Research worldwide and industry initiatives over the last decade show that the exploitation of marine resources has resulted in a significant decrease in the share of the total allowable catch (TAC). The response has been strengthened law enforcement, yet the results have shown that the problems were related to poor policies, a lack of understanding of over-exploitation, biological uncertainty, and political pressures. This reality, along with other statistics that show a significant negative impact on the attainment of the Sustainable Development Goals (SDGs), warrants an emphasis on the development of national M&E systems, in order to provide evidence-based information on the nature and scale of, especially, transnational fisheries crime and undersea marine resources in the BE. In particular, a need exists to establish a compendium of relevant BE indicators to assess such impact against the SDGs, using selected SDG indicators for this purpose. The research methodology consists of a qualitative approach using ATLAS.ti, and a case study will be developed of illegal, unregulated and unreported (IUU) poaching and the illegal wildlife trade (IWT) as components of the BE, as they relate to the case of abalone in southern Africa and the Far East. This research project will make an original contribution through the analysis and comparative assessment of available indicators in the design process of M&E systems, and by developing indicators and monitoring frameworks to track critical trends and tendencies in the status of the BE, ensuring that specific objectives are aligned with the indicators of the SDG framework. The research will provide a set of recommendations to governments and stakeholders involved in such projects on lessons learned, as well as priorities for future research. The research findings will enable scholars, civil society institutions, donors, and public servants to understand the capability of M&E systems and the importance of multi-level governance in the coordination of information management, together with knowledge management (KM) and M&E, at the international, regional, national, and local levels. This coordination should follow a sustainable development management approach, based on addressing the socio-economic challenges to the potential and sustainability of the BE, with an emphasis on ecosystem resilience, social equity, and resource efficiency. The focus of this research is timely, as the opportunities of the post-Covid-19 crisis recovery package can be grasped to set the economy on a path to sustainable development in line with the UN 2030 Agenda. The pandemic raises awareness worldwide of the need to eliminate IUU poaching and the illegal wildlife trade (IWT).

Keywords: Blue Economy (BE), transnational governance, Monitoring and Evaluation (M&E), Sustainable Development Goals (SDGs)

Procedia PDF Downloads 160
671 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the total world population suffers epileptic seizure attacks. Due to the abrupt flow of charge, the EEG (electroencephalogram) waveforms change, and numerous spikes and sharp waves appear in the EEG signals. Detection of epileptic seizures using conventional methods is time-consuming, and many methods have been developed to detect them automatically. The initial part of this paper provides a review of techniques used to detect epileptic seizures automatically. The automatic detection is based on feature extraction and classification patterns. For better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation, etc., to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review is to present the variations in the EEG signals at both stages, (i) interictal (recorded between epileptic seizure attacks) and (ii) ictal (recorded during an epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. The paper then investigates the effects of a noninvasive healing therapy on the subjects by studying the EEG signals using the latest signal processing techniques. The study has been conducted with Reiki as the healing technique, considered beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy, developed in Japan in the early 20th century. It is a system involving the laying on of hands to stimulate the body’s natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session, and post-intervention to bring about effective epileptic seizure control or its elimination altogether.
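
As an example of one of the reviewed features, sample entropy can be computed directly from a signal segment. The sketch below is a naive O(n²) implementation for clarity, assuming the common parameters m = 2 and r = 0.2·SD; it is not the code of any cited study.

```python
# Hedged sketch: sample entropy of a 1-D signal; irregular (seizure-like)
# activity tends to score differently from regular rhythms.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def matched_pairs(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)  # Chebyshev distance
        return (np.sum(d <= r) - len(t)) / 2  # pairs, excluding self-matches
    return -np.log(matched_pairs(m + 1) / matched_pairs(m))

rng = np.random.default_rng(1)
print(sample_entropy(rng.normal(size=400)))                     # higher: noise
print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 400))))  # lower: regular
```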

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 387
670 Empirical Superpave Mix-Design of Rubber-Modified Hot-Mix Asphalt in Railway Sub-Ballast

Authors: Fernando M. Soto, Gaetano Di Mino

Abstract:

The design of an unmodified bituminous mixture and of three mixtures containing rubber aggregate added by a dry process (RUMAC) was evaluated, using an empirical-analytical approach based on experimental findings obtained in the laboratory with the volumetric mix design by gyratory compaction. A reference dense-graded bituminous sub-ballast mixture (3% air voids and 4% bitumen by total weight of the mix) and three rubberized mixtures by dry process (1.5 to 3% rubber by total weight and 5-7% binder) were used, applying the Superpave mix design for level 3 (high-traffic) design rail lines. The analyzed railway trackbed section had a 19 cm compacted granular layer, while a 12 cm thickness was used for the sub-ballast. In order to evaluate the effect of increasing the specimen density (as a percent of its theoretical maximum specific gravity), this article illustrates the results obtained from comparative analyses of the influence of varying the binder-rubber percentages in the sub-ballast layer mix design. This work demonstrates that rubberized blends containing crumb and ground rubber in bituminous asphalt mixtures behave at least similarly to, or better than, conventional asphalt materials. Using the same methodology of volumetric compaction, the densification curves resulting from each mixture have been studied. The purpose is to obtain an optimum empirical parameter, a multiplier of the number of gyrations, necessary to reach the same compaction energy as in conventional mixtures. The work provides experimental parameters obtained by an empirical-analytical method, evaluating the results of the gyratory compaction of an HMA and of rubber-aggregate blends. Extensive integrated research has been carried out to assess the suitability of rubber-modified hot-mix asphalt mixtures as a sub-ballast layer in railway underlayment trackbeds. Design optimization was conducted for each mixture and the volumetric properties were analyzed. An improved and complete process for the manufacturing, compaction, and curing of these blends is also provided. By adopting this compaction-increase parameter, called the 'beta' factor, rubber-modified mixtures are obtained with densification and workability as uniform as in the conventional mixtures. It is found that, considering the usual bearing capacity requirements in rail track, the optimal rubber content is 2% (by weight) or 3.95% (by volumetric substitution), with a binder content of 6%.
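
For reference, a densification curve from the gyratory compactor is often summarized as %Gmm = C1 + C2·log10(N), and the paper's 'beta' factor acts as a multiplier on the gyration count. The sketch below fits synthetic data to show the procedure; the numbers and the beta value are hypothetical, not the paper's results.

```python
# Hedged sketch: fit a densification curve and scale gyrations by a 'beta'
# multiplier so a rubberized mix matches the reference compaction energy.
import numpy as np

gyrations = np.array([8, 25, 50, 100, 160, 205])
pct_gmm = np.array([86.0, 91.2, 93.6, 95.9, 97.2, 97.9])  # synthetic %Gmm

C2, C1 = np.polyfit(np.log10(gyrations), pct_gmm, 1)  # slope, intercept

def gyrations_for(target_pct_gmm):
    return 10.0 ** ((target_pct_gmm - C1) / C2)

n_ref = gyrations_for(96.0)   # gyrations for the reference mix at 96 %Gmm
n_rubber = 1.3 * n_ref        # hypothetical beta = 1.3 for a rubber blend
print(f"N_ref = {n_ref:.0f} gyrations, N_rubber = {n_rubber:.0f}")
```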

Keywords: empirical approach, rubber-asphalt, sub-ballast, superpave mix-design

Procedia PDF Downloads 350
669 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is an image modality used worldwide to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to six-fold increase in their risk of developing breast cancer. Therefore, studies have been conducted to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve radiologists’ workload by providing a first-aid opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to those of fibroglandular tissue, which makes it hard to automatically quantify mammographic breast density. Therefore, pre-processing is needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors’ institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for the pre-processing of images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle in mammograms. First, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by the active contour method, with the seed of the active contour placed at the boundary found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared the two methods in terms of the area (mm²) of the segmented pectoral muscle. The statistics showed data within the 95% confidence interval, supporting the accuracy of the segmentation compared to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. The segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
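
Both agreement measures are simple to compute from the two segmentations. The sketch below (synthetic masks, our code rather than the authors' Matlab® implementation) shows the Jaccard index on binary masks and Bland-Altman bias and limits of agreement on segmented areas.

```python
# Hedged sketch: Jaccard index between two binary masks and Bland-Altman
# bias/limits of agreement between two series of segmented areas (mm^2).
import numpy as np

def jaccard(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

def bland_altman(areas_auto, areas_manual):
    diff = np.asarray(areas_auto) - np.asarray(areas_manual)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width  # bias, lower, upper

a = np.zeros((100, 100), bool); a[10:60, 10:60] = True  # synthetic auto mask
b = np.zeros((100, 100), bool); b[12:62, 10:60] = True  # synthetic manual mask
print(f"Jaccard: {jaccard(a, b):.3f}")  # > 0.90 indicates strong agreement
```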

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 335
668 Immobilization of Superoxide Dismutase Enzyme on Layered Double Hydroxide Nanoparticles

Authors: Istvan Szilagyi, Marko Pavlovic, Paul Rouster

Abstract:

Antioxidant enzymes are the most efficient defense systems against reactive oxygen species, which cause severe damage in living organisms and industrial products. However, their supplementation is problematic due to their high sensitivity to environmental conditions. Immobilization on carrier nanoparticles is a promising research direction towards the improvement of their functional and colloidal stability. In that way, their applications in biomedical treatments and in manufacturing processes in the food, textile, and cosmetic industries can be extended. The main goal of the present research was to prepare and formulate antioxidant bionanocomposites composed of the superoxide dismutase (SOD) enzyme, anionic clay (layered double hydroxide, LDH) nanoparticles, and heparin (HEP) polyelectrolyte. To characterize the structure and the colloidal stability of the obtained compounds in suspension and in the solid state, electrophoresis, dynamic light scattering, transmission electron microscopy, spectrophotometry, thermogravimetry, X-ray diffraction, and infrared and fluorescence spectroscopy were used as experimental techniques. The LDH-SOD composite was synthesized by enzyme immobilization on the clay particles via electrostatic and hydrophobic interactions, which resulted in strong adsorption of the SOD on the LDH surface, i.e., no enzyme leakage was observed once the material was suspended in aqueous solutions. However, the LDH-SOD showed only limited resistance against salt-induced aggregation, and large, irregularly shaped clusters formed within a short time interval even at lower ionic strengths. Since sufficiently high colloidal stability is a key requirement in most of the applications mentioned above, the nanocomposite was coated with the HEP polyelectrolyte to develop highly stable suspensions of primary LDH-SOD-HEP particles. HEP is a natural anticoagulant with one of the highest negative line charge densities among known macromolecules. The experimental results indicated that it strongly adsorbed on the oppositely charged LDH-SOD surface, leading to charge inversion and to the formation of negatively charged LDH-SOD-HEP. The obtained hybrid materials formed stable suspensions even under extreme conditions, where classical colloid chemistry theories predict rapid aggregation of the particles and unstable suspensions. Such a stabilization effect originated from electrostatic repulsion between particles of the same sign of charge, as well as from steric repulsion due to the osmotic pressure raised during the overlap of the polyelectrolyte chains adsorbed on the surface. In addition, the SOD enzyme kept its structural and functional integrity during the immobilization and coating processes, and hence, the LDH-SOD-HEP bionanocomposite possessed excellent activity in the decomposition of superoxide radical anions, as revealed in biochemical test reactions. In conclusion, due to the improved colloidal stability and the good efficiency in scavenging superoxide radical anions, the developed enzymatic system is a promising antioxidant candidate for biomedical or other manufacturing processes wherever the aim is to decompose reactive oxygen species in suspension.

Keywords: clay, enzyme, polyelectrolyte, formulation

Procedia PDF Downloads 250
667 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA

Authors: Marek Dosbaba

Abstract:

Within the mining sector, SEM-based automated mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using an SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or segment, respectively. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy-dispersive spectroscopy. Re-evaluation of existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process, and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
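
A typical downstream script consumes such a tabular export and derives user-defined properties. The sketch below is hypothetical: the file name and column names are placeholders, not TIMA's actual export schema.

```python
# Hedged sketch: post-process a particle-by-particle export with a
# user-defined property (mass-weighted liberation per dominant mineral).
# File and column names are hypothetical placeholders.
import pandas as pd

particles = pd.read_csv("tima_particle_export.csv")  # one row per particle

particles["mass"] = particles["area_um2"] * particles["density_g_cm3"]
liberation = particles.groupby("dominant_mineral").apply(
    lambda g: (g["liberation_pct"] * g["mass"]).sum() / g["mass"].sum()
)
print(liberation.sort_values(ascending=False))
```

Scripts of this kind are one way the open data format can feed plant performance dashboards or geometallurgical models.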

Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data

Procedia PDF Downloads 93
666 Structural Performance of Mechanically Connected Stone Panels under Cyclic Loading: Application to Aesthetic and Environmental Building Skin Design

Authors: Michel Soto Chalhoub

Abstract:

Building designers in the Mediterranean region and other parts of the world utilize natural stone panels on exterior façades as a skin cover. This type of finishing is intended not only for aesthetic reasons but also for environmental ones. Stone has been used in construction since the earliest ages of civilization, and to date some of the most appealing buildings owe their beauty to stone finishing. Stone also provides warmth in winter and freshness in summer, as it moderates heat transfer and absorbs radiation. However, as structural codes became increasingly stringent about the dynamic performance of buildings, it became essential to study the performance of stone panels under cyclic loading - a condition that arises when the building is subjected to wind or earthquakes. The present paper studies the performance of stone panels attached with mechanical connectors when subjected to load reversal. We present a theoretical model that addresses modes of failure in the steel connectors, by yield, and modes of failure in the stone, by fracture. We then provide an experimental set-up and test results for rectangular stone panels of varying thickness. When the building is subjected to an earthquake, the rectangular panels within its structural system undergo shear deformations, which in turn impart stress into the stone cover. Rectangular stone panels, which typically range from 40cmx80cm to 60cmx120cm, need to be designed to withstand transverse loading from the direct application of lateral loads and, simultaneously, in-plane loading (membrane stress) caused by inter-story drift and overall building lateral deflection. Results show correlation between the theoretical model, which we derive from solid mechanics fundamentals, and the experimental results, and lead to practical design recommendations. We find that for panel thicknesses below a certain threshold, it is more advantageous to utilize structural adhesive materials to connect the stone panels to the main structural system of the building. For larger panel thicknesses, it is recommended to utilize mechanical connectors with special detailing to ensure a minimum level of ductility and energy dissipation.
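
The two simultaneous demands can be stated compactly; the following is a hedged sketch in our notation (standard plate and drift relations, not the paper's full model):

```latex
\sigma_b = \beta\,\frac{q\,b^2}{t^2}
  \quad\text{(bending stress of a panel of thickness $t$ and width $b$
  under lateral pressure $q$; $\beta$ depends on aspect ratio and
  support conditions)},
\qquad
\gamma = \frac{\Delta}{h}
  \quad\text{(in-plane shear strain imposed by inter-story drift $\Delta$
  over story height $h$)} .
```

The t² in the denominator of the bending term is what makes the panel-thickness threshold mentioned above decisive for the choice between adhesives and mechanical connectors.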

Keywords: solid mechanics, cyclic loading, mechanical connectors, natural stone, seismic, wind, building skin

Procedia PDF Downloads 245
665 Biodegradation of Phenazine-1-Carboxylic Acid by Rhodanobacter sp. PCA2 Proceeds via Decarboxylation and Cleavage of Nitrogen-Containing Ring

Authors: Miaomiao Zhang, Sabrina Beckmann, Haluk Ertan, Rocky Chau, Mike Manefield

Abstract:

Phenazines are a large class of nitrogen-containing aromatic heterocyclic compounds that are almost exclusively produced by bacteria from diverse genera, including Pseudomonas and Streptomyces. Phenazine-1-carboxylic acid (PCA), one of the 'core' phenazines, is converted from chorismic acid before being modified into other phenazine derivatives in different cells. Phenazines have attracted enormous interest because of their multiple roles in biocontrol, bacterial interaction, biofilm formation, and the fitness of their producers. However, in spite of their ecological importance, degradation, as a part of the fate of phenazines, has so far received extremely limited attention. Here, to isolate PCA-degrading bacteria, 200 mg L⁻¹ PCA was supplied as the sole carbon, nitrogen, and energy source in minimal mineral medium. Quantitative PCR and reverse-transcription PCR were employed to study the abundance and the activity, respectively, of the functional gene MFORT 16269 in PCA degradation. Intermediates and products of PCA degradation were identified with LC-MS/MS. After enrichment and isolation, a PCA-degrading strain was selected from soil and designated Rhodanobacter sp. PCA2 based on full-length 16S rRNA sequencing. As determined by HPLC, strain PCA2 consumed 200 mg L⁻¹ (836 µM) PCA at a rate of 17.4 µM h⁻¹, accompanied by a significant cell yield increase from 1.92 × 10⁵ to 3.11 × 10⁶ cells per mL. Strain PCA2 was capable of degrading other phenazines as well, including phenazine (4.27 µM h⁻¹), pyocyanin (2.72 µM h⁻¹), neutral red (1.30 µM h⁻¹), and 1-hydroxyphenazine (0.55 µM h⁻¹). Moreover, during the incubation, transcript copies of the MFORT 16269 gene increased significantly from 2.13 × 10⁶ to 8.82 × 10⁷ copies mL⁻¹, which was 2.77 times faster than the increase in the corresponding gene copy number (2.20 × 10⁶ to 3.32 × 10⁷ copies mL⁻¹), indicating that the MFORT 16269 gene was activated and played a role in PCA degradation. As analyzed by LC-MS/MS, decarboxylation of the ring structure was determined to be the first step of PCA degradation, followed by cleavage of the nitrogen-containing ring by a dioxygenase, which converted phenazine to nitrosobenzene. Subsequently, phenylhydroxylamine was detected after incubation for two days and was then transformed into aniline and catechol. Additionally, genomic and proteomic analyses were also carried out for strain PCA2. Overall, the findings presented here show that the newly isolated strain Rhodanobacter sp. PCA2 is capable of degrading phenazines through decarboxylation and cleavage of the nitrogen-containing ring, during which the MFORT 16269 gene is activated and plays an important role.
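
The reported zero-order rate can be recovered from HPLC time-series data by a linear fit. The sketch below uses synthetic values chosen to mimic the paper's numbers (836 µM consumed at about 17.4 µM h⁻¹); it is not the study's dataset.

```python
# Hedged sketch: extract a zero-order degradation rate from HPLC samples.
import numpy as np

t_h = np.array([0, 8, 16, 24, 32, 40, 48], dtype=float)  # sampling times (h)
pca_uM = 836.0 - 17.4 * t_h + np.random.default_rng(2).normal(0, 5, t_h.size)

slope, c0 = np.polyfit(t_h, pca_uM, 1)
print(f"rate = {-slope:.1f} uM/h, fitted initial conc. = {c0:.0f} uM")
```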

Keywords: decarboxylation, MFORT 16269 gene, phenazine-1-carboxylic acid degradation, Rhodanobacter sp. PCA2

Procedia PDF Downloads 207
664 Production of Bio-Composites from Cocoa Pod Husk for Use in Packaging Materials

Authors: L. Kanoksak, N. Sukanya, L. Napatsorn, T. Siriporn

Abstract:

A growing population and demand for packaging are driving up the use of natural resources as raw materials in the pulp and paper industry, and long-term environmental effects are disrupting people's way of life across the planet. Finding pulp sources to replace wood pulp is therefore necessary: various other plants or plant parts can be employed as substitute raw materials, for example agricultural residues whose pulp can be used in place of wood pulp. In this study, cocoa pod husks, an agricultural residue of the cocoa and chocolate industries, were used to develop composite materials to replace wood pulp in packaging materials, with the paper coated with polybutylene adipate-co-terephthalate (PBAT). Fresh cocoa pod husks were selected, cleaned, reduced in size, and dried. The morphology and elemental composition of the cocoa pod husks were studied. To evaluate the mechanical and physical properties, the dried husks were pulped using the soda-pulping process. After selecting the best formulation, paper with a PBAT bioplastic coating was produced on a paper-forming machine, and its physical and mechanical properties were studied. Field Emission Scanning Electron Microscopy with Energy Dispersive X-Ray Spectrometry (FESEM/EDS) revealed the main components of the dried cocoa pod husks; no porous structure was observed, and the fibers were firmly bound, making the husks suitable as a raw material for pulp manufacturing. Dry cocoa pod husks contain the major elements carbon (C) and oxygen (O), while magnesium (Mg), potassium (K), and calcium (Ca) are minor elements present at very low levels. Among the soda-pulping formulations, SAQ5 gave the best pulp yield, moisture content, and water drainage. To achieve the basis weight specified by the TAPPI T205 sp-02 standard, cocoa pod husk pulp and modified starch were mixed. The paper was then coated with PBAT bioplastic, produced from bioplastic resin by the blown-film extrusion technique. Contact-angle measurements, including the dispersion and polar components of surface energy, showed that the coated paper is an effective hydrophobic material for rigid packaging applications.

Keywords: cocoa pod husks, agricultural residue, composite material, rigid packaging

Procedia PDF Downloads 56
663 Using Business Simulations and Game-Based Learning for Enterprise Resource Planning Implementation Training

Authors: Carin Chuang, Kuan-Chou Chen

Abstract:

An Enterprise Resource Planning (ERP) system is an integrated information system that supports the seamless integration of all the business processes of a company. Implementing an ERP system can increase efficiency and decrease costs while helping improve productivity. Many organizations, including large, medium and small-sized companies, have used ERP systems for decades. Although an ERP system can bring competitive advantages to an organization, the lack of a proper training approach in ERP implementation is still a major concern. Organizations understand the importance of ERP training to adequately prepare managers and users. The low return on investment of ERP training, however, shows how difficult it is for knowledge workers to transfer what is learned in training to their jobs in the workplace. Inadequate and inefficient ERP training limits the value realization and success of an ERP system. This calls for profound change and innovation in ERP training, both in industry workplaces and in Information Systems (IS) education in academia. An innovative ERP training approach can improve users' knowledge of business processes and their hands-on skills in mastering an ERP system, and it can also serve as educational material for IS students in universities. The purpose of this study is to examine the use of ERP simulation games via the ERPsim system to train IS students in ERP implementation. ERPsim is a business simulation game developed by the ERPsim Lab at HEC Montréal, and the game runs on a real-life SAP (Systems, Applications and Products) ERP system. The training uses the ERPsim system as the tool for Internet-based simulation games, designed as online student competitions during class. The competitions involve student teams, with facilitation by the instructor, and put the students' business skills to the test via intensive simulation games on a real-world SAP ERP system. The teams run the full business cycle of a manufacturing company while interacting with suppliers, vendors, and customers by sending and receiving orders, delivering products, and completing the entire cash-to-cash cycle. To learn a range of business skills, each student adopts an individual business role and makes business decisions around the products and business processes. Based on the training experience gathered over rounds of business simulations, the findings show that learners can make mistakes at reduced risk, which helps them build self-confidence in problem-solving. In addition, learners' reflections on their mistakes help them identify the root causes of problems and further improve the efficiency of the training. ERP instructors teaching with the innovative approach report significant improvements in student evaluation, learner motivation, attendance and engagement, as well as increased learner technology competency. The findings of the study provide ERP instructors with guidelines to create an effective learning environment, and they can be transferred to a variety of other educational fields in which trainers are migrating towards a more active learning approach.

Keywords: business simulations, ERP implementation training, ERPsim, game-based learning, instructional strategy, training innovation

Procedia PDF Downloads 122
662 Standardizing and Achieving Protocol Objectives for the Chest Wall Radiotherapy Treatment Planning Process Using an O-ring Linac in High-, Low- and Middle-Income Countries

Authors: Milton Ixquiac, Erick Montenegro, Francisco Reynoso, Matthew Schmidt, Thomas Mazur, Tianyu Zhao, Hiram Gay, Geoffrey Hugo, Lauren Henke, Jeff Michael Michalski, Angel Velarde, Vicky de Falla, Franky Reyes, Osmar Hernandez, Edgar Aparicio Ruiz, Baozhou Sun

Abstract:

Purpose: Radiotherapy departments in low- and middle-income countries (LMICs) like Guatemala have recently introduced intensity-modulated radiotherapy (IMRT). IMRT has become the standard of care in high-income countries (HICs) due to reduced toxicity and improved outcomes in some cancers. The purpose of this work is to show the agreement between the dosimetric results in the dose-volume histograms (DVHs) and the objectives of the adopted protocol. This is our initial experience with an O-ring linac. Methods and Materials: An O-ring linac was installed at our clinic in Guatemala in 2019 and has been used to treat approximately 90 patients daily with IMRT. This linac is a fully image-guided device, since a megavoltage cone-beam computed tomography (MVCBCT) scan must be acquired to deliver each radiotherapy session. Each MVCBCT delivers 9 MU, which are taken into account during planning. To start the standardization, TG-263 nomenclature was employed, and a hypofractionated protocol was adopted to treat the chest wall, including the supraclavicular nodes, delivering 40.05 Gy in 15 fractions. Plans were developed using 4 semi-arcs spanning 179 to 305 degrees. The planner had to create optimization volumes for targets and organs at risk (OARs); the main difficulty for the planner was the base dose contributed by the MVCBCT. To evaluate the planning modality, we used 30 chest wall cases. Results: The manually created plans achieve the protocol objectives, which are the same as those of RTOG 1005, and the DVH curves are clinically acceptable. Conclusions: Although the O-ring linac cannot acquire kV images and the cone-beam CT is created using MV energy, the dose delivered by the daily imaging setup process does not affect the dosimetric quality of the plans, and the dose distribution is acceptable, achieving the protocol objectives.
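The fractionation and imaging numbers above imply the simple arithmetic sketched below. The 1 MU ≈ 1 cGy equivalence is a common calibration convention assumed here only to give a sense of scale for the imaging dose; it is not a value from the paper.

    # Sketch of the fractionation arithmetic behind the protocol.
    total_dose_gy = 40.05   # prescription of the adopted hypofractionated protocol
    fractions = 15
    mu_per_mvcbct = 9       # MU delivered by each daily MVCBCT (from the abstract)

    print(f"dose per fraction = {total_dose_gy / fractions:.2f} Gy")
    imaging_mu = mu_per_mvcbct * fractions
    print(f"imaging MU over the course = {imaging_mu} MU "
          f"(~{imaging_mu / 100:.2f} Gy at calibration conditions if 1 MU = 1 cGy)")

The cumulative imaging contribution, on the order of 1 Gy over 15 fractions under this assumption, is why the MVCBCT dose must be included as a base dose during planning.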

Keywords: hypofractionation, VMAT, chest wall, radiotherapy planning

Procedia PDF Downloads 98
661 Sustainable Production of Algae through Nutrient Recovery in the Biofuel Conversion Process

Authors: Bagnoud-Velásquez Mariluz, Damergi Eya, Grandjean Dominique, Frédéric Vogel, Ludwig Christian

Abstract:

The sustainability of algae-to-biofuel processes is seriously affected by the energy-intensive production of fertilizers. Large amounts of nitrogen and phosphorus are required for large-scale production, in many cases resulting in a negative impact on limited mineral resources. To realize the algal bioenergy opportunity, it appears crucial to promote processes that recover nutrients and/or make use of renewable sources, including waste. Hydrothermal (HT) conversion is a promising and suitable technology for generating biofuels from microalgae. Besides the facts that water is used as a 'green' reactant and solvent and that no biomass drying is required, the technology offers great potential for nutrient recycling. This study evaluated the possibility of treating the aqueous HT effluent by growing microalgae in it, thereby producing renewable algal biomass. As already demonstrated in previous works by the authors, the HT aqueous product, besides containing N, P and other important nutrients, presents a small fraction of rarely studied organic compounds. Therefore, heteroaromatic compounds extracted from the HT effluent were the target of the present research; they were profiled using GC-MS and LC-MS/MS. The results indicate the presence of cyclic amides, piperazinediones, amines and their derivatives. The most prominent nitrogenous organic compounds (NOCs) in the extracts, namely 2-pyrrolidinone and β-phenylethylamine (β-PEA), were examined carefully for their effect on microalgae. These two substances were prepared at three different concentrations (10, 50 and 150 ppm). The toxicity bioassay used three different microalgae strains: Phaeodactylum tricornutum, Chlorella sorokiniana and Scenedesmus vacuolatus. The confirmed IC50 was in all cases ca. 75 ppm. Experimental conditions were then set up for the growth of microalgae in the aqueous phase by adjusting the nitrogen concentration (the key nutrient for algae) to match that of a known commercial medium. This dilution lowered the specific NOCs to concentrations of 8.5 mg/L 2-pyrrolidinone, 1 mg/L δ-valerolactam and 0.5 mg/L β-PEA. Growth in the diluted HT solution remained constant, with no evidence of inhibition. An additional ongoing test addresses the possibility of applying an integrated water cleanup step using the existing hydrothermal catalytic facility.
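The dilution step can be framed as choosing the smallest factor that brings every NOC to or below its target concentration; a minimal sketch follows. The target values are those reported above, while the raw effluent concentrations are hypothetical placeholders, since the abstract does not report them.

    # Hedged sketch: pick a dilution factor for the HT effluent so every NOC
    # falls to or below its target concentration. Effluent values are assumed.
    targets_mg_L = {"2-pyrrolidinone": 8.5, "delta-valerolactam": 1.0, "beta-PEA": 0.5}
    effluent_mg_L = {"2-pyrrolidinone": 120.0,    # assumed
                     "delta-valerolactam": 15.0,  # assumed
                     "beta-PEA": 9.0}             # assumed

    dilution = max(effluent_mg_L[c] / targets_mg_L[c] for c in targets_mg_L)
    print(f"required dilution factor ~ {dilution:.0f}x")
    for c, conc in effluent_mg_L.items():
        print(f"{c}: {conc / dilution:.2f} mg/L (target {targets_mg_L[c]} mg/L)")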

Keywords: hydrothermal process, microalgae, nitrogenous organic compounds, nutrient recovery, renewable biomass

Procedia PDF Downloads 393
660 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Productivity should therefore be enhanced and kept competitive by developing and applying effective production plans. Among the major processes of tire manufacturing, which consist of mixing, component preparation, building and curing, the mixing process is an essential step because the main component of the tire, called the compound, is formed at this stage. The compound, a rubber synthesis with various characteristics, plays its own role in the tire as a finished product. Scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP), because various kinds of compounds have their own unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required may differ between operations due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one; this feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as the FJSSP. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, which includes a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the positions and velocities of the particles are updated, and the current best solution is compared with the new solutions. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research work. As a performance measure, we define an error rate that evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend the current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
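To make the encode-and-decode idea concrete, here is a minimal random-key PSO sketch for a toy instance with alternative machines and SDST. The problem data, the operation-based decoding, and all PSO parameters are illustrative assumptions, not the authors' exact scheme.

    import random
    from collections import defaultdict

    # Toy instance (assumed): ops[i] = (job, processing_time, eligible_machines);
    # setup[a][b] = sequence-dependent setup time when a machine switches
    # from a job-a compound to a job-b compound.
    ops = [(0, 3, [0, 1]), (0, 2, [1]), (1, 4, [0]), (1, 3, [0, 1]), (2, 2, [0, 1])]
    setup = [[0, 2, 1],
             [2, 0, 2],
             [1, 2, 0]]

    job_ops = defaultdict(list)
    for idx, (j, _, _) in enumerate(ops):
        job_ops[j].append(idx)

    def makespan(keys):
        # Random-key decoding: sorting the keys yields a job sequence; the k-th
        # occurrence of a job schedules that job's k-th operation, preserving
        # precedence. Each operation goes to the eligible machine that finishes
        # it earliest, sequence-dependent setup time included.
        seq = [ops[i][0] for i in sorted(range(len(ops)), key=keys.__getitem__)]
        nxt = defaultdict(int)
        mach_free, mach_last = [0.0, 0.0], [None, None]
        job_free = defaultdict(float)
        cmax = 0.0
        for j in seq:
            i = job_ops[j][nxt[j]]
            nxt[j] += 1
            _, p, elig = ops[i]
            def finish(m):
                su = setup[mach_last[m]][j] if mach_last[m] is not None else 0
                return max(mach_free[m] + su, job_free[j]) + p
            m = min(elig, key=finish)
            f = finish(m)
            mach_free[m], mach_last[m], job_free[j] = f, j, f
            cmax = max(cmax, f)
        return cmax

    # Standard PSO over the continuous keys (parameters are common defaults).
    n, dim, w, c1, c2 = 20, len(ops), 0.7, 1.5, 1.5
    X = [[random.random() for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P, pbest = [x[:] for x in X], [makespan(x) for x in X]
    g = min(range(n), key=pbest.__getitem__)
    G, gbest = P[g][:], pbest[g]
    for _ in range(100):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = makespan(X[i])
            if f < pbest[i]:
                pbest[i], P[i] = f, X[i][:]
                if f < gbest:
                    gbest, G = f, X[i][:]
    print("best makespan found:", gbest)

The random-key trick keeps the particle space continuous, so the standard PSO velocity update applies unchanged while the decoder guarantees every particle maps to a feasible schedule.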

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 245
659 Effects of Soaking of Maize on the Viscosity of Masa and Tortilla Physical Properties at Different Nixtamalization Times

Authors: Jorge Martínez-Rodríguez, Esther Pérez-Carrillo, Diana Laura Anchondo Álvarez, Julia Lucía Leal Villarreal, Mariana Juárez Dominguez, Luisa Fernanda Torres Hernández, Daniela Salinas Morales, Erick Heredia-Olea

Abstract:

Maize tortillas, a staple food in Mexico, are mostly made by nixtamalization, which includes cooking and steeping maize kernels under alkaline conditions. The cooking step in nixtamalization demands a lot of energy and also generates nejayote, a water pollutant, at the end of the process. The aim of this study was to reduce the cooking time by adding a maize soaking step before nixtamalization while maintaining the quality properties of the masa and tortillas. Maize kernels were soaked for 36 h to increase their moisture content to 36%. The effect of different cooking times (0, 5, 10, 15, 20, 25, 30, 35, 45 (control) and 50 minutes) was then evaluated on the viscosity profile (RVA) of the masa, to select the treatments with a profile similar or equal to the control. All treatments were left steeping overnight and were milled under the same conditions. The selected treatments were the 20- and 25-minute cooking times, which had values of pasting temperature (79.23 °C and 80.23 °C), maximum viscosity (105.88 cP and 96.25 cP) and final viscosity (188.5 cP and 174 cP) similar to those of the 45-minute control (77.65 °C, 110.08 cP and 186.70 cP, respectively). Tortillas were then produced with the chosen treatments (20 and 25 min) and the control, and analyzed for texture, damaged starch, colorimetry, thickness and average diameter. Colorimetric analysis of the tortillas showed significant differences only for the yellow/blue coordinate (b* parameter) at 20 min (0.885), unlike the 25-minute treatment (1.122). Luminosity (L*) and the red/green coordinate (a*) showed no significant differences between the treatments and the control (69.912 and 1.072, respectively); however, the 25-minute treatment was closer in both parameters (73.390 and 1.122) than the 20-minute treatment (74.08 and 0.884). For the color difference (ΔE), the 25-minute value (3.84) was the most similar to the control. For tortilla thickness and diameter, however, the 20-minute treatment, at 1.57 mm and 13.12 cm respectively, was closer to the control (1.69 mm and 13.86 cm), although smaller. The 25-minute treatment tortilla, at 1.51 mm thickness and 13.590 cm diameter, was smaller than both the 20-minute treatment and the control. According to the texture analyses, there was no difference in stretchability (8.803-10.308 gf) or distance to break (95.70-126.46 mm) among the treatments. For the breaking point, however, all treatments (317.1 gf and 276.5 gf for the 25- and 20-minute treatments, respectively) were significantly different from the control tortilla (392.2 gf). The results suggest that by adding a soaking step and reducing the cooking time by up to 25 minutes, the masa and tortillas obtained have functional and textural properties similar to those of the traditional nixtamalization process.
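For reference, the color difference ΔE reported above is conventionally the CIE76 Euclidean distance in L*a*b* space, sketched below. The control's b* value is not given in the abstract, so the b* entries here are assumptions for illustration only.

    import math

    def delta_e(lab1, lab2):
        # CIE76 colour difference: Euclidean distance in L*a*b* space
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

    control = (69.912, 1.072, 2.0)   # L*, a*, b*; b* assumed (not reported)
    t25 = (73.390, 1.122, 1.122)     # 25-minute treatment, values from the abstract
    t20 = (74.080, 0.884, 0.885)     # 20-minute treatment
    for name, lab in (("25 min", t25), ("20 min", t20)):
        print(f"dE({name} vs control) = {delta_e(control, lab):.2f}")

With these assumed b* values the 25-minute tortilla comes out closer to the control than the 20-minute one, consistent with the reported ΔE of 3.84.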

Keywords: tortilla, nixtamalization, corn, lime cooking, RVA, colorimetry, texture, masa rheology

Procedia PDF Downloads 149
658 Identifying Physical and Psycho-Social Issues Facing Breast Cancer Survivors after Definitive Treatment for Early Breast Cancer: A Nurse-Led Clinic Model

Authors: A. Dean, M. Pitcher, L. Storer, K. Shanahan, I. Rio, B. Mann

Abstract:

Purpose: Breast cancer survivors are at risk of specific physical and psycho-social issues, such as arm swelling, fatigue and depression. Firstly, we investigate the symptoms reported by Australian breast cancer survivors upon completion of definitive treatment. Secondly, we evaluate the appropriateness and effectiveness of a multi-centre pilot nurse-led clinic program to identify these issues and make timely referrals to available services. Methods: Patients post definitive treatment (excluding ongoing hormonal therapy) for early breast cancer or ductal carcinoma in situ were invited to participate. An hour-long appointment with a breast care nurse (BCN) was scheduled. In preparation, patients completed validated quality-of-life surveys (FACT-B, Menopause Rating Scale, Distress Thermometer). During the appointment, issues identified in the surveys were addressed and referrals to appropriate services arranged. Results: 183 of 274 (67%) eligible patients attended a nurse-led clinic. Mean age was 56.8 years (range 29-87 years); 181/183 were women; 105/183 were post-menopausal. 96 (55%) participants reported a significant level of distress; 31 (18%) reported extreme distress or depression. Distress stemmed from a lack of energy (56/175), poor quality of sleep (50/176), inability to work or participate in household activities (35/172) and problems with sex life (28/89). 166 referrals were offered, and 94% of patients accepted them. 65% responded to a follow-up survey: the majority of women either strongly agreed or agreed that the BCN was overwhelmingly supportive, helpful in making referrals, and compassionate towards them. 39% reported making lifestyle changes as a result of the BCN. Conclusion: Breast cancer survivors experience a unique set of challenges, including low mood, difficulty sleeping, problems with sex life and fear of disease recurrence. The nurse-led clinic model is an appropriate and effective method to ensure physical and psycho-social issues are identified and managed in a timely manner. This model empowers breast cancer survivors with information about their diagnosis and available services.

Keywords: early breast cancer, survivorship, breast care nursing, oncology nursing and cancer care

Procedia PDF Downloads 386
657 Cultural Competence in Palliative Care

Authors: Mariia Karizhenskaia, Tanvi Nandani, Ali Tafazoli Moghadam

Abstract:

Hospice palliative care (HPC) is one of the most complicated philosophies of care, in which the physical, social/cultural and spiritual aspects of human life are intermingled, each playing an undeniably significant role. Among these dimensions of care, culture occupies an outstanding position in the process and goal determination of HPC. This study shows the importance of cultural elements in the establishment of effective and optimized structures of HPC in the Canadian healthcare environment. Our systematic search included Medline, Google Scholar, and the St. Lawrence College Library, considering original, peer-reviewed research papers published from 1998 to 2023, to identify recent national literature connecting culture and palliative care delivery. The feature most frequently presented among the articles is the role of culture in the efficiency of HPC. It has been shown repeatedly that including the culture-specific parameters of each nation in this system of care is vital for its success. On the other hand, ignorance of the distinctive cultural trends in a specific location has been accompanied by significant failure rates. Accordingly, implementing a culturally adaptable approach is mandatory for multicultural societies. Another outcome of research studies in this field underscores the importance of culture-oriented education for healthcare staff, so that all practitioners involved in HPC recognize the importance of traditions, religions, and social habits in addressing care requirements. Cultural competency training is a telling example of the establishment of this strategy in health care, and it has come to the aid of HPC in recent years. Another complexity of culturally informed HPC nowadays is the long-standing issue of racialization: the systematic and subconscious deprivation of minorities has always been an adversity of advanced levels of care. The last part of the constellation of our research outcomes comprises the ethical considerations of culturally driven HPC. This is the most sophisticated aspect of our topic, because almost all the analyses, arguments and justifications are subjective. While there was no standard measure for ethical elements in clinical studies with palliative interventions, many research teams endorsed applying ethical principles to all the patients involved. Notably, interpretations and projections of ethics differ across cultural backgrounds. Therefore, healthcare providers should always be aware of the most respectful methodologies of HPC on a case-by-case basis. Cultural training programs have been utilized as one of the main tactics to improve the ability of healthcare providers to address the cultural needs and preferences of diverse patients and families. In this way, most of the healthcare practitioners involved will be equipped with cultural competence, and consideration of the ethical and racial specifications of the clients of this service will boost the effectiveness and fruitfulness of HPC. Canadian society is a colorful compilation of multiple nationalities; accordingly, healthcare clients are diverse, and this diversity is also reflected in HPC patients. This fact justifies the importance of studying all the cultural aspects of HPC in order to provide optimal care across this enormous land.

Keywords: cultural competence, end-of-life care, hospice, palliative care

Procedia PDF Downloads 58
656 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions that lead to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and machine learning (ML) algorithms to predict drilling events in real time from surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical surface drilling data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined signature that adheres to the local geology. Once the drilling operation starts, the live surface data, in Wellsite Information Transfer Standard Markup Language (WITSML) format, are fed into a matrix and aggregated at the same frequency as the pre-determined signature. The matrix is then correlated, in real time, with the pre-determined stuck-pipe signature for the field. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class while identifying redundant features. The correlation output is interpreted as a probability curve for stuck pipe incidents in real time. Once this probability passes a fixed threshold defined by the user, the second component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures had been created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Predicting stuck pipe problems requires a method to capture geological, geophysical and drilling data and to recognize the indicators of this issue at the field and geological-formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
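As a rough illustration of the CFS step and the fixed-threshold alert, the sketch below uses the standard CFS merit with a greedy forward search on synthetic data. The feature names, the stand-in signature, and the 0.8 threshold are assumptions for illustration, not the paper's values.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    X = rng.normal(size=(n, 4))   # e.g. hookload, torque, SPP, RPM (assumed)
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 1).astype(float)

    def merit(subset):
        # CFS merit: k*rcf / sqrt(k + k*(k-1)*rff), where rcf is the mean
        # feature-class correlation and rff the mean feature-feature correlation.
        k = len(subset)
        rcf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
        rff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                       for a in subset for b in subset if a < b]) if k > 1 else 0.0
        return k * rcf / np.sqrt(k + k * (k - 1) * rff)

    # Greedy forward search: add features while the merit keeps improving.
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        best = max(remaining, key=lambda f: merit(selected + [f]))
        if selected and merit(selected + [best]) <= merit(selected):
            break
        selected.append(best)
        remaining.remove(best)
    print("selected features:", selected, "merit:", round(merit(selected), 3))

    # Real-time side: correlate the latest aggregated sample with the
    # pre-determined signature and alert past a user-defined threshold.
    signature = X[y == 1].mean(axis=0)   # stand-in for a stacked signature
    live_window = X[-1]                  # latest aggregated live sample
    prob = abs(np.corrcoef(live_window, signature)[0, 1])
    THRESHOLD = 0.8                      # user-defined alert threshold (assumed)
    if prob > THRESHOLD:
        print(f"ALERT: stuck-pipe correlation {prob:.2f} exceeds {THRESHOLD}")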

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 204
655 Spark Plasma Sintering/Synthesis of Alumina-Graphene Composites

Authors: Nikoloz Jalabadze, Roin Chedia, Lili Nadaraia, Levan Khundadze

Abstract:

Nanocrystalline materials in powder form can be manufactured by a number of different methods; however, manufacturing composite material products in the same nanocrystalline state is still a problem, because the compaction and synthesis of nanocrystalline powders are accompanied by intensive particle growth, a process that promotes the formation of pieces in an ordinary crystalline state instead of the desired nanocrystalline state. To date, spark plasma sintering (SPS) has been considered the most promising and energy-efficient method for producing dense bodies of composite materials. An advantage of the SPS method in comparison with other methods is mainly the low temperature and short duration of the sintering procedure, which finally gives an opportunity to obtain dense material with a nanocrystalline structure. Graphene has recently garnered significant interest as a reinforcing phase in composite materials because of its excellent electrical, thermal and mechanical properties. Graphene nanoplatelets (GNPs) in particular have attracted much interest as reinforcements for ceramic matrix composites (mostly Al2O3, Si3N4, TiO2 and ZrB2, among others). SPS has been shown to effectively and fully densify a variety of ceramic systems, including Al2O3, often with improvements in mechanical and functional behavior. Alumina consolidated by SPS has been shown to have superior hardness, fracture toughness, plasticity and optical translucency compared to conventionally processed alumina. Knowledge of how GNPs influence sintering behavior is important for effective processing and manufacturing. In this study, the effects of GNPs on the SPS processing of Al2O3 are investigated by systematically varying the sintering temperature, holding time and pressure. Our experiments showed that the SPS process is also appropriate for the synthesis of nanocrystalline powders of alumina-graphene composites; depending on the size of the molds, it is possible to obtain different amounts of nanopowders. The structure, physical-chemical, mechanical and performance properties of the elaborated composite materials were investigated. The results of this study provide a fundamental understanding of the effects of GNPs on sintering behavior, thereby providing a foundation for future optimization of the processing of these promising nanocomposite systems.

Keywords: aluminum oxide, ceramic matrix composites, graphene nanoplatelets, spark plasma sintering

Procedia PDF Downloads 359
654 Intervention To Prevent Infections And Reinfections With Intestinal Parasites In People Living With Human Immunodeficiency Virus In Some Parts Of Eastern Cape, South Africa

Authors: Ifeoma Anozie, Teka Apalata, Dominic Abaver

Abstract:

Introduction: Despite the use of antiretroviral therapy to reduce the incidence of opportunistic infections among HIV/AIDS patients, rapid episodes of re-infection after deworming are still common, because pharmaceutical intervention alone does not prevent reinfection. Unsafe water, inadequate personal hygiene and parasitic infections are widely expected to accelerate the progression of HIV infection, because the chronic immunosuppression of HIV infection encourages susceptibility to opportunistic (including parasitic) infections, which is linked to a CD4+ cell count of <200 cells/µl. Intestinal parasites such as G. intestinalis and Entamoeba spp. are ubiquitous protozoa that remain infectious for a long time in the environment and show resistance to standard disinfection. To control re-infection, the social factors that underpin prevention need to be addressed. This study aims at the prevention of intestinal parasitic infections in people living with HIV/AIDS using a treatment, hygiene education and sanitation (THEdS) bundle approach. Methods: This study was conducted in four clinics (Ngangelizwe health centre, Tsolo gateway clinic, Idutywa health centre and Nqamakwe health centre) across the seven districts of the Eastern Cape, South Africa. The four clinics were divided into two groups, experimental and control, for the purpose of the intervention. Data were collected from March 2019 to February 2020. Six hundred participants were screened for intestinal parasitic infections. Stool samples were collected and analysed twice: before (pre-test infection screening) and after (post-test re-infection) the THEdS bundle intervention. The experimental clinics received the full intervention package, which included therapeutic treatment, health education on personal hygiene and sanitation training, while the control clinics received only therapeutic treatment for those found with intestinal parasitic infections. Results: Baseline screening isolated 12 intestinal parasite species with an overall frequency of 65, Ascaris lumbricoides being the most frequent (44.6%). The intervention had a cure rate of 60%, with an odds ratio of 1.42, indicating that the intervention group was 1.42 times more likely to clear parasites than the control group. The relative risk of 1.17 signifies a 1.17 times greater likelihood of clearing intestinal parasites with the intervention than without it. Discussion and conclusion: Infection with multiple parasites can cause health defects, especially among HIV/AIDS patients. The efficiency of some HIV vaccines in HIV/AIDS patients is affected because treatment after re-infection amplifies drug resistance, affects the efficacy of front-line drugs, and still permits transmission. In South Africa, treatment of intestinal parasites is usually offered to clinic-attending HIV/AIDS patients upon suspicion, but it is not mandated for patients being initiated into the antiretroviral therapy (ART) program. The effectiveness of the THEdS bundle argues for the inclusion of mandatory, regular screening for intestinal parasitic infections among attendees of HIV/AIDS clinics.
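The odds ratio and relative risk above follow from a 2x2 table of cleared versus not-cleared counts. The counts in the sketch below are hypothetical values chosen only to reproduce the reported figures (60% cure rate, OR ~1.42, RR ~1.17); they are not the study's raw data.

    # Sketch of the odds-ratio / relative-risk arithmetic from a 2x2 table.
    a, b = 60, 40   # intervention: cleared / not cleared (assumed counts)
    c, d = 37, 35   # control: cleared / not cleared (assumed counts)

    risk_int, risk_ctl = a / (a + b), c / (c + d)
    odds_ratio = (a * d) / (b * c)
    relative_risk = risk_int / risk_ctl
    print(f"cure rate (intervention) = {risk_int:.0%}")
    print(f"odds ratio = {odds_ratio:.2f}, relative risk = {relative_risk:.2f}")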

Keywords: cure rate, HIV/AIDS patients, intestinal parasites, intervention studies, reinfection rate

Procedia PDF Downloads 59