Search results for: safety standard operation procedure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12277

577 Vertebral Artery Dissection Complicating Pregnancy and Puerperium: Case Report and Review of the Literature

Authors: N. Reza Pour, S. Chuah, T. Vo

Abstract:

Background: Vertebral artery dissection (VAD) is a rare complication of pregnancy. It can occur spontaneously or following a traumatic event. The pathogenesis is unclear. Predisposing factors include chronic hypertension, Marfan’s syndrome, fibromuscular dysplasia, vasculitis and cystic medial necrosis. Physiological changes of pregnancy have also been proposed as potential mechanisms of injury to the vessel wall. The clinical presentation varies: VAD can present as a headache, neck pain, diplopia, a transient ischaemic attack, or an ischaemic stroke. Isolated cases of VAD in pregnancy and the puerperium have been reported in the literature. In one case, a posterior circulation stroke resulted from bilateral VAD, and labour was induced at 37 weeks gestation for preeclampsia. Another patient at 38 weeks had severe neck pain that persisted after induction for elevated blood pressure, and arteriography showed a right VAD postpartum. A single case of lethal VAD in pregnancy with subsequent massive subarachnoid haemorrhage has been reported, confirmed at autopsy. Case Presentation: We report two cases of vertebral artery dissection in pregnancy. The first patient was a 32-year-old primigravida who presented in the 38th week of pregnancy with the onset of early labour and a blood pressure (BP) of 130/70 on arrival. After 2 hours, she developed a severe headache with blurred vision, and her BP was 238/120. Despite treatment with an intravenous antihypertensive, she had an eclamptic seizure. Magnesium sulfate was started and an emergency Caesarean section was performed under general anaesthesia. On the second day after the operation, she developed left-sided neck pain. Magnetic resonance imaging (MRI) angiography confirmed a short-segment left vertebral artery dissection at the level of C3. The patient was treated with aspirin and remained stable without any neurological deficit. 
The second patient was a 33-year-old primigravida who was admitted to the hospital at 36 weeks gestation with a BP of 155/105, a constant headache and visual disturbances. She was medicated with an oral antihypertensive agent. On day 4, she complained of right-sided neck pain. An MRI angiogram revealed a short-segment dissection of the right vertebral artery at the C2-3 level. The pregnancy was terminated on the same day by emergency Caesarean section, and anticoagulation was started subsequently. Post-operative recovery was complicated by a rectus sheath haematoma requiring evacuation. She was discharged home on aspirin without any neurological sequelae. Conclusion: Because of the collateral circulation, unilateral vertebral artery dissections may go unrecognized and may be more common than suspected. The outcome for most patients is benign, reflecting the adequacy of the collateral circulation in young patients. Spontaneous VAD is usually treated with anticoagulation or antiplatelet therapy for a minimum of 3-6 months to prevent future ischaemic events, allowing the dissection to heal on its own. We had two cases of VAD in the context of hypertensive disorders of pregnancy with an acceptable outcome. A high level of vigilance is required, particularly with preeclamptic patients presenting with head or neck pain, to allow an early diagnosis. We hypothesize that early and aggressive management of vertebral artery dissection may prevent further complications.

Keywords: eclampsia, preeclampsia, pregnancy, vertebral artery dissection

Procedia PDF Downloads 278
576 Integration of Building Information Modeling Framework for 4D Constructability Review and Clash Detection Management of a Sewage Treatment Plant

Authors: Malla Vijayeta, Y. Vijaya Kumar, N. Ramakrishna Raju, K. Satyanarayana

Abstract:

The global AEC (architecture, engineering, and construction) industry has been described as one of the domains most resistant to embracing technology. Although the digital era has been inundated with software tools like CAD, STAAD, CANDY, Microsoft Project, Primavera, etc., the key stakeholders have been working in silos and processes remain fragmented. Unlike the simpler project delivery methods of earlier years, current projects are fast-track, complex, risky, multidisciplinary, subject to stakeholder influence and statutory regulation, and thus face extensive bottlenecks that prevent timely completion. At this juncture, a paradigm shift has surfaced in the construction industry: Building Information Modeling, aka BIM, has bolstered the cooperative and collaborative work of multidisciplinary teams, leading to more productive, sustainable and leaner project outcomes. Building Information Modeling is an integrative, stakeholder-engaging and centralized approach that provides a common platform of communication. A common misconception in the Indian construction industry is that BIM is only for building/high-rise projects; this paper instead discusses the implementation of BIM processes and methodologies in the water and wastewater industry. It elucidates BIM 4D planning and constructability reviews of a sewage treatment plant in India. Conventional construction planning and logistics management involves a blend of experience coupled with imagination. Even though the judgments and lessons learnt of veterans may be predictive and helpful, the uncertainty factor persists. This paper presents a case study of the real-time implementation of BIM 4D planning protocols for one of the sewage treatment plants of the Dravyavati River Rejuvenation Project in India and develops a TimeLiner to identify logistics planning and clash detection. 
With these BIM processes, we find a significant reduction in the duplication of tasks and rework. Another benefit is better visualization and workarounds during the conception stage, which enables early involvement of the stakeholders in the project life cycle of sewage treatment plant construction. Moreover, we also conducted an opinion poll on the benefits accrued from BIM processes versus traditional paper-based communication such as 2D and 3D CAD tools. This paper thus concludes with a BIM framework for sewage treatment plant construction that achieves optimal construction coordination advantages such as 4D construction sequencing, interference checking, and clash detection and resolution through the primary engagement of all key stakeholders, thereby identifying potential risks and subsequently creating risk response strategies. However, certain hiccups, such as hesitancy among naïve users to adopt BIM technology and the limited availability of proficient BIM trainers in India, pose a phenomenal impediment. Hence, nurturing BIM processes from conception and construction through commissioning, operation and maintenance, and on to the deconstruction of a project’s life cycle is highly essential for the Indian construction industry in this digital era.
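The clash detection step described above can be illustrated in miniature. The following sketch reduces each model element to an axis-aligned bounding box and reports pairs that overlap on all three axes; the element names and coordinates are hypothetical, and real BIM clash tools of course operate on full geometry rather than boxes.

```python
# Minimal sketch of hard-clash detection between model elements,
# each reduced to an axis-aligned bounding box ((xmin, ymin, zmin),
# (xmax, ymax, zmax)). Coordinates below are illustrative only.

def boxes_clash(a, b, tolerance=0.0):
    """True if two axis-aligned boxes overlap on all three axes.

    A positive tolerance ignores overlaps smaller than that value.
    """
    (amin, amax), (bmin, bmax) = a, b
    for axis in range(3):
        overlap = min(amax[axis], bmax[axis]) - max(amin[axis], bmin[axis])
        if overlap <= tolerance:
            return False
    return True

def find_clashes(elements, tolerance=0.0):
    """Pairwise check; returns a list of (name_i, name_j) clash pairs."""
    names = list(elements)
    clashes = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if boxes_clash(elements[names[i]], elements[names[j]], tolerance):
                clashes.append((names[i], names[j]))
    return clashes

# Hypothetical elements of a treatment-plant model (metres)
elements = {
    "pipe_A": ((0, 0, 0), (10, 1, 1)),
    "duct_B": ((5, 0, 0.5), (6, 2, 2)),   # crosses pipe_A
    "wall_C": ((20, 0, 0), (21, 10, 4)),  # clear of both
}
print(find_clashes(elements))  # → [('pipe_A', 'duct_B')]
```

The pairwise loop is quadratic in the number of elements; production tools accelerate it with spatial indexing, but the overlap test itself is the same idea.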

Keywords: integrated BIM workflow, 4D planning with BIM, building information modeling, clash detection and visualization, constructability reviews, project life cycle

Procedia PDF Downloads 122
575 Endemic Asteraceae from Mauritius Islands as Potential Phytomedicines

Authors: S. Kauroo, J. Govinden Soulange, D. Marie

Abstract:

Psiadia species from the Asteraceae are traditionally used in the folk medicine of Mauritius to treat cutaneous and bronchial infections. The present study aimed at validating the phytomedicinal properties of selected species from the Asteraceae family, namely Psiadia arguta, Psiadia viscosa, Psiadia lithospermifolia, and Distephanus populifolius. Dried hexane, ethyl acetate, and methanol leaf extracts were studied for their antioxidant properties using the DPPH (1,1-diphenyl-2-picrylhydrazyl), FRAP (Ferric Reducing Ability of Plasma), and deoxyribose assays. Antibacterial activity against human pathogenic bacteria, namely Escherichia coli (ATCC 27853), Staphylococcus aureus (ATCC 29213), Enterococcus faecalis (ATCC 29212), Klebsiella pneumoniae (ATCC 27853), Pseudomonas aeruginosa (ATCC 27853), and Bacillus cereus (ATCC 11778), was measured using the broth microdilution assay. Qualitative phytochemical screening using standard methods revealed the presence of coumarins, tannins, leucoanthocyanins, and steroids in all the tested extracts. The measured phenolics level of the selected plant extracts varied from 24.0 to 231.6 mg GAE/g, with the maximum level in the methanol extracts of all four species. The highest flavonoid and proanthocyanidin contents were noted in Psiadia arguta methanolic extracts, with 65.7±1.8 mg QE/g and 5.1±0.0 mg CAT/g dry weight (DW) extract, respectively. The maximum free radical scavenging activity was measured in Psiadia arguta methanol and ethyl acetate extracts, with IC50 values of 11.3±0.2 and 11.6±0.2 µg/mL, respectively, followed by Distephanus populifolius methanol extracts with an IC50 of 11.3±0.8 µg/mL. The maximum ferric reducing antioxidant potential was noted in Psiadia lithospermifolia methanol extracts, with a FRAP value of 18.8±0.4 µmol Fe2+/L/g DW. 
The antioxidant capacities based on the DPPH and deoxyribose values were negatively related to total phenolics, flavonoid and proanthocyanidin contents, while the ferric reducing antioxidant potential was strongly correlated with them. All four species exhibited antimicrobial activity against the tested bacteria (both Gram-negative and Gram-positive). Interestingly, the hexane and ethyl acetate extracts of Psiadia viscosa and Psiadia lithospermifolia were more active than the control antibiotic chloramphenicol. The minimum inhibitory concentration (MIC) values for hexane and ethyl acetate extracts of Psiadia viscosa and Psiadia lithospermifolia against the tested bacteria ranged from 62.5 to 500 µg/mL. These findings validate the use of the tested Asteraceae in the traditional medicine of Mauritius and also highlight their pharmaceutical potential as prospective phytomedicines.
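The IC50 values quoted above come from DPPH dose-response curves. As a minimal sketch of how such a value can be obtained, the code below linearly interpolates the concentration giving 50% inhibition between the two bracketing data points; the concentrations and inhibition percentages are hypothetical, not the study's raw data.

```python
# Sketch: estimating an IC50 from DPPH scavenging data by linear
# interpolation between the two concentrations bracketing 50 %
# inhibition. The data below are hypothetical.

def ic50(concentrations, inhibitions):
    """Concentration giving 50% inhibition, by linear interpolation.

    Assumes inhibition increases with concentration and crosses 50%.
    """
    points = list(zip(concentrations, inhibitions))
    for (c0, i0), (c1, i1) in zip(points, points[1:]):
        if i0 <= 50.0 <= i1:
            return c0 + (50.0 - i0) * (c1 - c0) / (i1 - i0)
    raise ValueError("50% inhibition not bracketed by the data")

conc = [2.5, 5.0, 10.0, 20.0, 40.0]     # µg/mL
inhib = [12.0, 27.0, 44.0, 68.0, 88.0]  # % DPPH scavenging
print(ic50(conc, inhib))  # → 12.5 µg/mL (between 10 and 20 µg/mL)
```

In practice a four-parameter logistic fit over the full curve is preferred, but the bracketing interpolation conveys where a reported IC50 number comes from.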

Keywords: antibacterial, antioxidant, DPPH, flavonoids, FRAP, Psiadia spp

Procedia PDF Downloads 531
574 Addressing the Gap in Health and Wellbeing Evidence for Urban Real Estate Brownfield Asset Management Social Needs and Impact Analysis Using Systems Mapping Approach

Authors: Kathy Pain, Nalumino Akakandelwa

Abstract:

The study explores the potential to fill a gap in health and wellbeing evidence for purposeful urban real estate asset management, to make investment a powerful force for societal good. Part of a five-year programme investigating the root causes of unhealthy urban development funded by the United Kingdom Prevention Research Partnership (UKPRP), the study pilots the use of a systems mapping approach to identify drivers and barriers to the incorporation of health and wellbeing evidence in urban brownfield asset management decision-making. Urban real estate not only provides space for economic production but also contributes to the quality of life in the local community. Yet market approaches to urban land use have, until recently, insisted that neo-classical, technology-driven efficient allocation of economic resources should inform acquisition, operational, and disposal decisions. Buildings in locations with declining economic performance have thus been abandoned, leading to urban decay. Property investors are now recognising the inextricable connection between sustainable urban production and quality of life in local communities. The redevelopment and operation of brownfield assets recycles existing buildings, minimising embodied carbon emissions. It also retains established urban spaces with which local communities identify, and regenerates places to create a sense of security, economic opportunity, social interaction, and quality of life. The social implications of urban real estate for health and wellbeing, and the increased adoption of benign sustainability guidance in urban production, are driving the need to consider how these factors affect brownfield real estate asset management decisions. Interviews with upstream real estate decision-makers in the study find that local social needs and impact analysis is becoming a commercial priority for large-scale urban real estate development projects. 
Evidence of the social value added by proposed developments is increasingly considered essential to secure local community support and planning permissions, and to attract sustained long-term inward investment capital flows for urban projects. However, little is known about the contribution of population health and wellbeing to socially sustainable urban projects, or about the monetary value of the opportunity this presents to improve the urban environment for local communities. We report early findings from collaborations with two leading property companies managing major investments in brownfield urban assets in the UK, considering how the inclusion of health and wellbeing evidence in social valuation can inform perceptions of brownfield development social benefit for asset managers, local communities, public authorities and investors alike. Using holistic case studies and systems mapping approaches, we explore complex relationships between public health considerations and asset management decisions in urban production. Findings indicate a strong real estate investment industry appetite, and potential, to include health as a vital component of sustainable real estate social value creation in asset management strategies.

Keywords: brownfield urban assets, health and wellbeing, social needs and impact, social valuation, sustainable real estate, systems mapping

Procedia PDF Downloads 69
573 A Systematic Review of Antimicrobial Resistance in Fish and Poultry – Health and Environmental Implications for Animal Source Food Production in Egypt, Nigeria, and South Africa

Authors: Ekemini M. Okon, Reuben C. Okocha, Babatunde T. Adesina, Judith O. Ehigie, Babatunde M. Falana, Boluwape T. Okikiola

Abstract:

Antimicrobial resistance (AMR) has evolved to become a significant threat to global public health and food safety. The development of AMR in animals has been associated with antimicrobial overuse. In recent years, the quantity of antimicrobials used in food animals such as fish and poultry has escalated. It therefore becomes imperative to understand the patterns of AMR in fish and poultry and to map out future directions for better surveillance efforts. This study used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to assess the trends, patterns, and spatial distribution of AMR research in Egypt, Nigeria, and South Africa. A literature search was conducted through the Scopus and Web of Science databases, in which published studies on AMR between 1989 and 2021 were assessed. A total of 172 articles were relevant to this study. The results showed progressively growing attention to AMR studies in fish and poultry from 2018 to 2021 across the selected countries. The period between 2018 (23 studies) and 2021 (25 studies) showed a significant increase in AMR publications, with a peak in 2019 (28 studies). Egypt was the leading exponent of AMR research (43%, n=74), followed by Nigeria (40%, n=69) and then South Africa (17%, n=29). AMR studies in fish received relatively little attention across the countries. The majority of the AMR studies were on poultry in Egypt (82%, n=61), Nigeria (87%, n=60), and South Africa (83%, n=24). Further, most of the studies were on Escherichia and Salmonella species. The antimicrobials most frequently researched were the ampicillin, erythromycin, tetracycline, trimethoprim, chloramphenicol, and sulfamethoxazole groups. Multiple drug resistance was prevalent, as demonstrated by the antimicrobial resistance patterns. 
In poultry, Escherichia coli isolates were resistant to cefotaxime, streptomycin, chloramphenicol, enrofloxacin, gentamycin, ciprofloxacin, oxytetracycline, kanamycin, nalidixic acid, tetracycline, trimethoprim/sulphamethoxazole, erythromycin, and ampicillin. Salmonella enterica serovars were resistant to tetracycline, trimethoprim/sulphamethoxazole, cefotaxime, and ampicillin. Staphylococcus aureus showed high-level resistance to streptomycin, kanamycin, erythromycin, cefoxitin, trimethoprim, vancomycin, ampicillin, and tetracycline. Campylobacter isolates were resistant to ceftriaxone, erythromycin, ciprofloxacin, tetracycline, and nalidixic acid to varying degrees. In fish, Enterococcus isolates showed resistance to penicillin, ampicillin, chloramphenicol, vancomycin, and tetracycline but were sensitive to ciprofloxacin, erythromycin, and rifampicin. Isolated strains of Vibrio species showed sensitivity to florfenicol and ciprofloxacin but resistance to trimethoprim/sulphamethoxazole and erythromycin. Isolates of Aeromonas and Pseudomonas species exhibited resistance to ampicillin and amoxicillin. Specifically, Aeromonas hydrophila isolates showed sensitivity to cephradine, doxycycline, erythromycin, and florfenicol; however, resistance was also exhibited against Augmentin and tetracycline. The findings constitute public and environmental health threats and suggest the need to promote and advance AMR research in other countries, particularly those in global hotspots for antimicrobial use.
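The country shares reported in the abstract follow directly from the study counts; a few lines of code reproduce the arithmetic (the counts are the abstract's own figures, n=74, n=69, n=29 out of 172 relevant articles).

```python
# Reproducing the country-share arithmetic reported above:
# 172 relevant studies split across Egypt, Nigeria, and South Africa.

counts = {"Egypt": 74, "Nigeria": 69, "South Africa": 29}
total = sum(counts.values())  # 172 relevant articles

# Percentage share per country, rounded to whole percent
shares = {country: round(100 * n / total) for country, n in counts.items()}
print(total, shares)  # → 172 {'Egypt': 43, 'Nigeria': 40, 'South Africa': 17}
```

The rounded shares match the 43% / 40% / 17% split quoted in the abstract.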

Keywords: antibiotics, antimicrobial resistance, bacteria, environment, public health

Procedia PDF Downloads 199
572 Building an Opinion Dynamics Model from Experimental Data

Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as a number). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. 
This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This strongly violates what common models suggest, in which a person starting at, for example, +8 will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration also differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
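The continuous-opinion encoding described above, and the observation that random fluctuation outweighs social influence, can be sketched in a few lines. The pull and noise magnitudes below are illustrative assumptions chosen so that noise dominates influence, as the abstract reports; they are not the fitted parameters of the authors' model.

```python
# Sketch of the "continuous opinion" encoding (agree = +1, disagree = -1,
# multiplied by certainty 1..10), plus a toy update in which a small
# social-influence pull toward the partner's side is combined with a
# larger random fluctuation. Parameter values are illustrative only.

import random

def continuous_opinion(opinion, certainty):
    """Encode agree/disagree plus certainty as a value in [-10, +10]."""
    sign = {"agree": 1, "disagree": -1}[opinion]
    assert 1 <= certainty <= 10
    return sign * certainty

def update(value, partner_opinion, pull=0.5, noise=1.5, rng=random):
    """One interaction: small pull toward the partner, larger jitter."""
    direction = 1 if partner_opinion == "agree" else -1
    new = value + pull * direction + rng.uniform(-noise, noise)
    return max(-10.0, min(10.0, new))  # clip to the scale

x = continuous_opinion("agree", 8)  # +8
print(x, update(x, "disagree"))
```

Note that with pull < noise, a single interaction can move the value either way; only on average does exposure to "disagree" push it downward, matching the reported dominance of random fluctuations over influence.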

Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule

Procedia PDF Downloads 109
571 Social Ties and the Prevalence of Single Chronic Morbidity and Multimorbidity among the Elderly Population in Selected States of India

Authors: Sree Sanyal

Abstract:

Research in ageing often highlights the age-related health dimension more than the psycho-social characteristics of the elderly, which also influence and challenge health outcomes. Multimorbidity is defined as a person having more than one chronic non-communicable disease, and its prevalence increases with ageing. The study aims to evaluate the influence of social ties on the self-reported prevalence of multimorbidity (selected chronic non-communicable diseases) among the elderly population in selected states of India. The data are drawn from Building a Knowledge Base on Population Ageing in India (BKPAI), collected in 2011, covering self-reported chronic non-communicable diseases such as arthritis, heart disease, diabetes, lung disease with asthma, hypertension, cataract, depression, dementia, Alzheimer’s disease, and cancer. These diseases were taken together and categorized as ‘no disease’, ‘one disease’, and ‘multimorbidity’. The predictor variables were demographic and socio-economic characteristics, residential type, and social-ties variables including social support, social engagement, perceived support, connectedness, and the importance of the elderly. Predicted probabilities from multiple logistic regression were used to determine the association between the background characteristics of the old and chronic morbidity, including multimorbidity. The findings suggest that 24.35% of the elderly suffer from multimorbidity. The research shows that, with reference to ‘no disease’ and according to socio-economic characteristics, the female oldest old (80+) who belong to other castes and religions, are widowed, never had any formal education, ever worked in their life, come from the second wealth quintile, and live in rural Maharashtra are more prone to ‘one disease’. 
From the social-ties background, the elderly who perceive that they are important to the family, whose decision-making status has changed as they have grown older, who prefer to stay with a son and spouse only, and who are satisfied with the communication from their children are less likely to have single morbidity, and the results are significant. Again, with respect to ‘no disease’, the female oldest old (80+) who belong to other castes, are Christian, widowed, have less than 5 years of completed education, ever worked, come from the highest wealth quintile, and reside in urban Kerala are more associated with multimorbidity. The elderly who are more socially connected through family visits and public gatherings, get support in decision making, and prefer to spend their later years with a son and spouse only but stay alone show a lower prevalence of multimorbidity. In conclusion, received and perceived social integration and neighbourhood support in old age, along with knowing one’s own needs in life, facilitate better health and wellbeing of the elderly population in selected states of India.

Keywords: morbidity, multimorbidity, prevalence, social ties

Procedia PDF Downloads 121
570 Development of a Quick On-Site Pass/Fail Test for the Evaluation of Fresh Concrete Destined for Application as Exposed Concrete

Authors: Laura Kupers, Julie Piérard, Niki Cauberg

Abstract:

The use of exposed concrete (sometimes referred to as architectural concrete) keeps gaining popularity. Exposed concrete has the advantage of combining the structural properties of concrete with an aesthetic finish. However, for a successful aesthetic finish, much attention needs to be paid to the execution (formwork, release agent, curing, weather conditions…) and the concrete composition (choice of the raw materials and mix proportions), as well as to its fresh properties. For the latter, a simple on-site pass/fail test could halt the casting of concrete not suitable for architectural concrete and thus avoid expensive repairs later. When architects opt for exposed concrete, they usually want a smooth, uniform and nearly blemish-free surface. For this, a standard ‘construction’ concrete does not suffice. An aesthetic surface finish requires the concrete to contain a minimum content of fines to minimize the risk of segregation and to allow complete filling of more complex-shaped formworks. Nor may the concrete be too viscous, as this makes it more difficult to compact and increases the risk of blowholes blemishing the surface. On the other hand, too much bleeding may cause color differences on the concrete surface. An easy pass/fail test, performed on site just before casting, could avoid these problems: if the fresh concrete fails the test, it is rejected; only if it passes is the concrete cast. The pass/fail tests are intended for a concrete with consistency class S4. Five tests were selected as possible on-site pass/fail tests. Two of these tests already exist: the K-slump test (ASTM C1362) and the Bauer Filter Press Test. 
The remaining three tests were developed by the BBRI to test the segregation resistance of fresh concrete on site: the ‘dynamic sieve stability test’, the ‘inverted cone test’ and an adapted ‘visual stability index’ (VSI) for the slump and flow test. These tests were inspired by existing tests for self-compacting concrete, for which segregation resistance is of great importance. The suitability of the fresh concrete mixtures was also tested by means of a laboratory reference test (resistance to segregation) and by visual inspection (blowholes, structure…) of small test walls. More than fifteen concrete mixtures of different quality were tested. The results of the pass/fail tests were compared with the results of this laboratory reference test and the test walls. The preliminary laboratory results indicate that concrete mixtures ‘suitable’ for placement as exposed concrete (containing sufficient fines, a balanced grading curve, etc.) can be distinguished from ‘inferior’ concrete mixtures. Additional laboratory tests, as well as tests on site, will be conducted to confirm these preliminary results and to set appropriate pass/fail values.

Keywords: exposed concrete, testing fresh concrete, segregation resistance, bleeding, consistency

Procedia PDF Downloads 423
569 Evaluation of Cooperative Hand Movement Capacity in Stroke Patients Using the Cooperative Activity Stroke Assessment

Authors: F. A. Thomas, M. Schrafl-Altermatt, R. Treier, S. Kaufmann

Abstract:

Stroke is the main cause of adult disability. Upper limb function in particular is affected in most patients. Recently, cooperative hand movements have been shown to be a promising type of upper limb training in stroke rehabilitation. In these movements, which are frequently found in activities of daily living (e.g. opening a bottle, winding up a blind), the force of one upper limb has to be equally counteracted by the other limb to accomplish a task successfully. The use of standardized and reliable clinical assessments is essential to evaluate the efficacy of therapy and the functional outcome of a patient. Many assessments of upper limb function or impairment are available; however, the evaluation of cooperative hand movement tasks is rarely included in them. Thus, the aims of this study were (i) to develop a novel clinical assessment (CASA - Cooperative Activity Stroke Assessment) for the evaluation of patients’ capacity to perform cooperative hand movements and (ii) to test its intra- and interrater reliability. Furthermore, CASA scores were compared to current gold-standard assessments for the upper extremity in stroke patients (i.e. the Fugl-Meyer Assessment and the Box & Blocks Test). The CASA consists of five cooperative activities of daily living: (1) opening a jar, (2) opening a bottle, (3) opening and closing a zip, (4) unscrewing a nut and (5) opening a clip box. The goal is to accomplish the tasks as fast as possible. In addition to the quantitative rating (i.e. time), which is converted to a 7-point scale, the quality of the movement is rated on a 4-point scale. To test the reliability of the CASA, fifteen stroke subjects were tested twice within a week by the same two raters. Intra- and interrater reliability was calculated using the intraclass correlation coefficient (ICC) for the total CASA score and single items. 
Furthermore, Pearson correlation was used to compare the CASA scores to the scores of the Fugl-Meyer upper limb assessment and the Box & Blocks test, which were assessed in every patient in addition to the CASA. ICC scores indicated excellent reliability for the total CASA score, and single items showed good to excellent intra- and interrater reliability. Furthermore, the CASA score was significantly correlated with the Fugl-Meyer and Box & Blocks scores. The CASA provides a reliable assessment of cooperative hand movements, which are crucial for many activities of daily living. Due to its low-cost setup and easy and fast implementation, we suggest that it is well suited for clinical application. In conclusion, the CASA is a useful tool for assessing the functional status and therapy-related recovery of cooperative hand movement capacity in stroke patients.
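The interrater ICC reported above can be illustrated with a small from-scratch computation. The sketch below implements one common variant, the two-way random-effects, absolute-agreement, single-rater ICC (often written ICC(2,1)), via an ANOVA decomposition; the abstract does not state which ICC form was used, and the rating table is hypothetical.

```python
# Sketch of a two-way random-effects, absolute-agreement, single-rater
# ICC computed from an ANOVA decomposition. This is one common ICC
# variant, assumed for illustration; the score table is hypothetical.

def icc_2_1(ratings):
    """ratings: list of subjects, each a list of k rater scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)  # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)  # raters
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)               # between-subjects mean square
    msc = ss_cols / (k - 1)               # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters scoring five subjects (hypothetical CASA totals)
scores = [[24, 25], [18, 18], [30, 29], [12, 14], [27, 27]]
print(round(icc_2_1(scores), 3))  # → 0.987, i.e. excellent agreement
```

Perfect rater agreement yields an ICC of exactly 1.0; values above roughly 0.9 are conventionally read as excellent, matching the reliability the abstract reports for the total score.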

Keywords: activities of daily living, clinical assessment, cooperative hand movements, reliability, stroke

Procedia PDF Downloads 319
568 Bituminous Geomembranes: Sustainable Products for Road Construction and Maintenance

Authors: Ines Antunes, Andrea Massari, Concetta Bartucca

Abstract:

The role of greenhouse gases (GHG) in the atmosphere has been well known since the 19th century; however, researchers began to relate them to climate change only in the second half of the following century. From that moment, scientists started to correlate the presence of GHGs such as CO₂ with global warming. This has raised the awareness not only of experts in the field but also of public opinion, which is becoming more and more sensitive to environmental pollution and sustainability issues. Nowadays, the reduction of GHG emissions is one of the principal objectives of EU nations. The target is an 80% reduction of emissions by 2050, reaching the important goal of carbon neutrality. The road sector is responsible for an important share of those emissions (about 20%). Most of this is due to traffic, but a good part is also contributed, directly or indirectly, by road construction and maintenance. The choice of raw materials and the reuse of post-consumer plastic, as well as cleverer road design, can contribute importantly to reducing the carbon footprint. Bituminous membranes can be successfully used as reinforcement systems in asphalt layers to improve road pavement performance against cracking. Composite materials coupling membranes with grids and/or fabrics should be able to combine the improved tensile properties of the reinforcement with the stress-absorbing and waterproofing effects of the membranes. Polyglass, with its brand dedicated to road construction and maintenance called Polystrada, has done more than this. The company’s target was not only to focus sustainability on the final application but also to implement a greener mentality from cradle to grave. Starting from production, Polyglass has made important improvements aimed at increasing efficiency and minimizing waste. 
The installation of a trigeneration plant and the usage of selected production scraps inside the products as well as the reduction of emissions into the environment, are one of the main efforts of the company to reduce impact during final product build-up. Moreover, the benefit given by installing Polystrada products brings a significant improvement in road lifetime. This has an impact not only on the number of maintenance or renewal that needs to be done (build less) but also on traffic density due to works and road deviation in case of operations. During the end of the life of a road, Polystrada products can be 100% recycled and milled with classical systems used without changing the normal maintenance procedures. In this work, all these contributions were quantified in terms of CO₂ emission thanks to an LCA analysis. The data obtained were compared with a classical system or a standard production of a membrane. What it is possible to see is that the usage of Polyglass products for street maintenance and building gives a significant reduction of emissions in case of membrane installation under the road wearing course.
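The "build less" reasoning above comes down to simple arithmetic: if a reinforcement stretches the maintenance interval, the avoided interventions can offset its extra production footprint. The sketch below illustrates the idea; all figures are invented placeholders, not Polyglass LCA data.

```python
# Toy life-cycle CO2 comparison (placeholder numbers, not real LCA results):
# total emissions = installation emissions + emissions per maintenance event
# times the number of events over the road's life.

def lifecycle_co2(install_co2, maintenance_co2, road_life_years, interval_years):
    events = road_life_years // interval_years
    return install_co2 + events * maintenance_co2

# A reinforced pavement costs more CO2 to build but doubles the
# maintenance interval in this hypothetical scenario.
standard = lifecycle_co2(install_co2=100.0, maintenance_co2=40.0,
                         road_life_years=30, interval_years=5)     # 6 events
reinforced = lifecycle_co2(install_co2=120.0, maintenance_co2=40.0,
                           road_life_years=30, interval_years=10)  # 3 events
assert reinforced < standard  # fewer interventions offset the extra material
```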

Keywords: CO₂ emission, LCA, maintenance, sustainability

Procedia PDF Downloads 65
567 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data

Authors: Martin Pellon Consunji

Abstract:

Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, in order to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to gain a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be attributed to the inherent advantages that bots have over humans in processing large amounts of data, their lack of emotions such as fear or greed, and their ability to predict market prices using past data and artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. The general limitation of these approaches, however, is that limited historical data does not always determine the future, and that many market participants are still human, emotion-driven traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most well-established markets. Due to this, some human traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method that uses neuroevolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data in order to gain a more accurate forecast of future market behavior and account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies.
This study’s approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and a real Bitcoin market live trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results during a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
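The evolutionary loop described above can be sketched in a few lines: each bot is a weight vector combining technical-analysis and sentiment signals, fitness is simulated profit, and selection plus mutation improves the population. This is a minimal illustration of the technique, not the authors' implementation; all signals, prices, and parameters are invented.

```python
import random

def simulate_profit(weights, signals, prices):
    """Toy backtest: go fully long when the weighted signal is positive."""
    cash, coins = 1000.0, 0.0
    for sig, price in zip(signals, prices):
        score = sum(w * s for w, s in zip(weights, sig))
        if score > 0 and cash > 0:       # buy signal
            coins, cash = cash / price, 0.0
        elif score <= 0 and coins > 0:   # sell signal
            cash, coins = coins * price, 0.0
    return cash + coins * prices[-1]

def evolve(signals, prices, pop_size=20, generations=30):
    """Evolve weight vectors: keep the best half, mutate to refill."""
    pop = [[random.uniform(-1, 1) for _ in range(len(signals[0]))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: simulate_profit(w, signals, prices), reverse=True)
        parents = pop[:pop_size // 2]                               # selection
        children = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]        # mutation
        pop = parents + children
    return max(pop, key=lambda w: simulate_profit(w, signals, prices))
```

In a realistic setting the signal vectors would be refreshed with the latest technical indicators and sentiment scores at each generation, as the abstract describes.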

Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms

Procedia PDF Downloads 123
566 STML: Service Type-Checking Markup Language for Services of Web Components

Authors: Saqib Rasool, Adnan N. Mian

Abstract:

Web components are introduced as the latest standard of HTML5 for writing modular web interfaces, ensuring maintainability through the isolated scope of each web component. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS and JavaScript code as a standalone package which must be imported for integrating the web component within an existing web interface. This is then followed by the integration of the web component with web services for dynamically populating its content. Since web components are reusable as off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service's behavior can be verified through type checking, one of the popular solutions for improving code quality in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is a new extension of HTML called Service Type-checking Markup Language (STML), which adds type-checking support in HTML for JSON-based REST services. STML can be used for defining the expected data types of responses from JSON-based REST services, which are used for populating the content of the HTML elements of a web component. Although JSON values can be of type string, number, boolean, object, array or null, STML supports only string, number and boolean, because both objects and arrays are treated as strings when populated in HTML elements. In order to define the data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number or st-boolean for string, number or boolean, respectively.
All these STML annotations are added by the developer writing a web component, and they enable other developers to use automated type checking for ensuring the proper integration of their REST services with the same web component. Two utilities have been written for developers using STML-based web components. The first is used for automated type checking during the development phase: it uses the browser console to show an error description if an integrated web service does not return a response with the expected data type. The second is a Gulp-based command-line utility for removing the STML attributes before going into production, ensuring the delivery of STML-free web pages in the production environment. Both utilities have been tested for type checking of REST services through STML-based web components, and the results have confirmed the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type checking of integrated REST services, but it can be extended into a complete service testing suite based on HTML only, which would transform STML from a Service Type-checking Markup Language into a Service Testing Markup Language.
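The core type-checking rule can be illustrated with a small mock checker. The paper's actual utilities are JavaScript and Gulp based, so the Python sketch below only mimics the idea: each element declares an expected type via an st-* attribute, and the checker validates a JSON REST response against those declarations.

```python
import json

# Mapping from STML attributes to the Python types a JSON value decodes to.
EXPECTED = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

def check_bindings(bindings, response_json):
    """bindings: element id -> st-* attribute. Returns a list of error
    messages, one per element whose JSON value has the wrong type."""
    data = json.loads(response_json)
    errors = []
    for element_id, st_attr in bindings.items():
        value = data.get(element_id)
        if isinstance(value, bool):            # bool is a subclass of int,
            ok = st_attr == "st-boolean"       # so handle it separately
        else:
            ok = isinstance(value, EXPECTED[st_attr])
        if not ok:
            errors.append(f"{element_id}: expected {st_attr}, "
                          f"got {type(value).__name__}")
    return errors
```

In STML's development-phase utility, errors like these would surface in the browser console instead of a returned list.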

Keywords: REST, STML, type checking, web component

Procedia PDF Downloads 254
565 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a significant time saver that delivers high-value measurements, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
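The measurement step described above, labeling each crystal in the segmented image and extracting its dimensions, can be sketched as a connected-component pass over a binary mask. The paper's own object-delimitation algorithm is not specified in detail, so this is an illustrative stand-in operating on a tiny mask.

```python
def label_crystals(mask):
    """mask: 2D list of 0/1 segmentation values.
    Returns a list of pixel areas, one per connected crystal, from which
    frequency distributions of crystal size can be built."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                area, stack = 0, [(r, c)]   # iterative flood fill,
                seen[r][c] = True           # 4-connectivity
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas
```

In the full pipeline the mask would come from the U-net segmentation, and perimeter and lateral measures would be extracted alongside the areas.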

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 160
564 Social Problems and Gender Wage Gap Faced by Working Women in Readymade Garment Sector of Pakistan

Authors: Narjis Kahtoon

Abstract:

The issue of wage discrimination on the basis of gender, and of the social problems faced by working women, has been a significant research topic for several decades. While many studies have explored reasons for the persistence of inequality between the wages of men and women, none has successfully explained away the entire differential. Gender-based wage discrimination and the social problems of working women are global issues: despite differences in the political, economic and social make-up of countries all over the world, gender wage discrimination and social constraints are present everywhere. The aim of this research is to examine gender wage discrimination and social constraints from an international perspective and to determine whether any pattern exists between the cultural dimensions of a country and the male-female remuneration gap in the readymade garment sector of Pakistan. Population growth rate is a significant indicator used to explain population change and plays a crucial role in the economic development of a country. In Pakistan, the readymade garment sector consists of small, medium and large sized firms, and an estimated 30 percent of the textile-garment workforce is female. The readymade garment industry is labor intensive, relies on the skills of individual workers and provides the highest value addition in the textile sector. In this sector, female workers are concentrated in poorly paid, labor-intensive down-stream production (readymade garments, linen, towels, etc.), while male workers dominate the capital-intensive (ginning, spinning and weaving) processes. Gender wage discrimination and social constraints are a reality in the Pakistani labor market. This research allows us not only to properly detect the size of gender wage discrimination and social constraints but also to fully understand their consequences in the readymade garment sector of Pakistan. Furthermore, the research will evaluate these measures for the three main clusters of Lahore, Karachi, and Faisalabad.
The data contain complete details of male and female workers and supervisors in the readymade garment sector of Pakistan, and these sources of information provide a unique opportunity to reanalyze previous findings in the literature. The regression analysis focuses on the standard 'Mincerian' earnings equation and estimates it separately by gender; the research will also apply the cultural dimensions developed by Hofstede (2001) to profile a country's cultural status and compare those cultural dimensions to the wage inequalities. The readymade garment sector is one of the important sectors of Pakistan, since its products are in huge demand at home and abroad. This research will have a major influence on the measures undertaken to design public policy regarding wage discrimination and social constraints in the readymade garment sector of Pakistan.
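The standard Mincerian specification mentioned above, together with the usual Blinder-Oaxaca style split of the gender gap into an explained part (differences in characteristics) and an unexplained part (differences in returns, often read as discrimination), can be sketched as follows. The coefficients and means are invented for illustration and are not estimates from this study.

```python
# Mincerian earnings equation:
#   ln(wage) = b0 + b1*schooling + b2*experience + b3*experience**2

def predicted_log_wage(beta, x):
    b0, b1, b2, b3 = beta
    schooling, exper = x
    return b0 + b1 * schooling + b2 * exper + b3 * exper ** 2

def oaxaca_gap(beta_m, mean_x_m, beta_f, mean_x_f):
    """Split the mean log-wage gap into an 'explained' part (differences in
    characteristics, valued at male coefficients) and an 'unexplained'
    residual (differences in coefficients)."""
    gap = predicted_log_wage(beta_m, mean_x_m) - predicted_log_wage(beta_f, mean_x_f)
    explained = (predicted_log_wage(beta_m, mean_x_m)
                 - predicted_log_wage(beta_m, mean_x_f))
    return gap, explained, gap - explained
```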

Keywords: gender wage differentials, decomposition, garment, cultural

Procedia PDF Downloads 209
563 Living with Functional Movement Disorder: An Exploratory Study of the Lived Experience of Five Individuals with Functional Movement Disorder

Authors: Stephanie Zuba-Bates

Abstract:

Purpose: This qualitative research study explored the lived experience of people with functional movement disorder (FMD), including how it impacts their quality of life and participation in life activities. It aims to educate health care professionals about FMD from the perspective of those living with the disorder. Background: Functional movement disorder is characterized by abnormal motor movements, including tremors, abnormal gait, paresis, and dystonia, with no known underlying pathophysiological cause. Current research estimates that FMD may account for 2-20% of clients seen by neurologists. Getting a diagnosis of FMD is typically a long and difficult process. In addition, many healthcare professionals are unfamiliar with the disorder, which may delay treatment. People living with FMD face great disruption in major areas of life, including activities of daily living (ADLs), work, leisure, and community participation. Occupational therapy (OT) practitioners have expertise in working with people with both physical disabilities and mental illness, and this expertise has the potential to guide treatment and become part of the standard of care. In order for occupational therapists to provide these services, they must be aware of the disorder and must advocate for clients to be referred to OT services. In addition, referring physicians and other health professionals need to understand how having FMD impacts the daily functioning of people living with the disorder and how OT services can intervene to improve their quality of life. This study aimed to answer the following research questions: 1) What is the lived experience of individuals with FMD?; 2) How has FMD impacted their participation in major areas of life?; and 3) What treatment have they found to be effective in improving their quality of life? Method: A naturalistic approach was used to collect qualitative data through semi-structured telephone interviews of five individuals living with FMD.
Subjects were recruited from social media websites and resources for people with FMD. Data were analyzed for common themes among participants. Results: Common themes included the variability of symptoms of the disorder; challenges to receiving a diagnosis; frustrations with and distrust of health care professionals; the impact of FMD on the participants' ability to perform daily activities; and strategies for living with the symptoms of FMD. Conclusion: All of the participants in the study had to modify their daily activities, roles and routines as a result of the disorder. This is an area where occupational therapists may intervene to improve the quality of life of these individuals. Additionally, participants reported frustration with the medical community regarding awareness of the disorder and how they were treated by medical professionals. Much more research on, and awareness of, the disorder is needed.

Keywords: functional movement disorder, occupational therapy, participation, quality of life

Procedia PDF Downloads 168
562 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition

Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can

Abstract:

To effectively combat climate change, many countries around the world have committed to a decarbonisation of their electricity supply, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the network infrastructure necessary to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for application to realistic-sized power system models. To meet these challenges, there is an increasing need for efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method.
One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of mixed-integer nature, and solving them therefore requires significant amounts of time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% for estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, the integration of the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models.
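One way to preserve the optimality guarantee while using a learned classifier, consistent with the idea described above, is to accept the classifier's predicted column only if it actually prices out (negative reduced cost), and otherwise fall back to the exact mixed-integer subproblem. The sketch below is schematic: every function is a toy stand-in, not the authors' TNEP model.

```python
def solve_relaxed_subproblem():
    # Stand-in for the LP relaxation: returns fractional values of the
    # binary investment variables.
    return [0.92, 0.08, 0.67]

def classifier_predict(fractional):
    # Stand-in for the trained binary classifier (here: simple rounding
    # of the relaxed solution).
    return [1 if v >= 0.5 else 0 for v in fractional]

def reduced_cost(binary_solution):
    # Stand-in pricing check: a negative reduced cost means the candidate
    # column improves the restricted master problem.
    return -1.0 if sum(binary_solution) >= 2 else 0.5

def exact_mip_subproblem():
    # Fallback that guarantees optimality when the predicted column
    # fails to price out.
    return [1, 0, 1]

def generate_column():
    """Fast classifier path first; exact MIP fallback preserves the
    optimality guarantee of column generation."""
    candidate = classifier_predict(solve_relaxed_subproblem())
    if reduced_cost(candidate) < 0:
        return candidate, "classifier"
    return exact_mip_subproblem(), "exact"
```

The speed-up comes from how often the cheap classifier path succeeds; with the reported 97% classifier accuracy, the exact fallback is rarely needed.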

Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning

Procedia PDF Downloads 85
561 The Taiwan Environmental Impact Assessment Act Contributes to the Water Resources Saving

Authors: Feng-Ming Fan, Xiu-Hui Wen

Abstract:

Shortage of water resources is a crucial problem to be solved in Taiwan. However, the lack of effective, mandatory regulation on water recovery and recycling means there are currently no effective water resource controls. Although existing legislation sets standards regarding water recovery, implementation and enforcement of the legislation face challenges. In order to break through this dilemma, this study aims to find enforcement tools, improve inspection skills and develop an inspection system, to achieve sustainable development of precious water resources. The Taiwan Environmental Impact Assessment Act (EIA Act) was announced in 1994. The aim of the EIA Act is to protect the environment by preventing and mitigating the adverse impact of development activity on the environment. During the EIA process, standards can be set that require enterprises to reach a certain percentage of water recycling based on different case characteristics, to promote sewage source reduction and water-saving benefits. Next, we inspected how the enterprises handle their waste water and perform water recovery based on their environmental assessment commitments, for the purpose of reviewing and measuring the implementation efficiency of water recycling and reuse, an eco-friendly measure. We invited leading experts in related fields to lecture on water recycling, strengthened law enforcement officials' inspection knowledge, and wrote an inspection reference manual to be used as the basis of enforcement. The manual was then finalized by reaching mutual agreement between the experts and the relevant agencies. We then inspected 65 high-tech companies, each with daily water consumption over 1,000 tons, located in three science parks set up by the Ministry of Science and Technology. A great achievement in water recycling was reached, at an amount of 400 million tons per year, equivalent to 2.5 months of water usage by the general public in Taiwan.
The amount is equal to 710 billion 600-ml bottles of cola, 170 thousand international standard swimming pools of 2,500 tons each, the irrigation water applied to 40 thousand hectares of rice fields, or 1.7 times the storage of the Taipei Feitsui Reservoir. This study demonstrated the promoting effect of environmental impact assessment commitments on water recycling, and therefore on sustainable water resource development. It also confirms the value of the EIA Act for environmental protection. Economic development should go hand in hand with environmental protection; this is now a mainstream view. The study clearly shows that EIA regulation can minimize the harmful effects of development activity on the environment while pursuing sustainable development of water resources.
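The equivalents quoted above follow from straightforward unit conversions (1 ton of water ≈ 1 m³ = 1,000 litres). The sketch below checks that, for a recovered volume of 400 million tons, the bottle and swimming-pool figures come out in the same order of magnitude as reported.

```python
# Order-of-magnitude check of the water-recycling equivalents, taking the
# recovered volume as 400 million tons (1 ton of water ~ 1 m^3 = 1000 L).

recovered_tons = 400e6
litres = recovered_tons * 1000      # 4.0e11 L recovered per year

bottles_600ml = litres / 0.6        # ~6.7e11, i.e. hundreds of billions
pools_2500t = recovered_tons / 2500 # ~1.6e5 standard 2,500-ton pools

assert 1e11 < bottles_600ml < 1e12  # same order as the quoted 710 billion
assert 1e5 <= pools_2500t < 1e6     # same order as the quoted 170 thousand
```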

Keywords: the environmental impact assessment act, water recycling environmental assessment commitment, water resource sustainable development, water recycling, water reuse

Procedia PDF Downloads 247
560 Converting Urban Organic Waste into Aquaculture Feeds: A Two-Step Bioconversion Approach

Authors: Aditi Chitharanjan Parmar, Marco Gottardo, Giulia Adele Tuci, Francesco Valentino

Abstract:

The generation of urban organic waste is a significant environmental problem due to the potential release of leachate and/or methane into the environment. This contributes to climate change and discards a valuable resource that could be used in various ways. This research addresses the issue by proposing a two-step approach that links biowaste management to the aquaculture industry via single cell protein (SCP) production. A mixture of food waste and municipal sewage sludge (FW-MSS) was first subjected to mesophilic (37°C) anaerobic fermentation to produce a liquid stream rich in short-chain fatty acids (SCFAs), which are important building blocks for the subsequent microbial biomass growth. Under stable fermentation activity (after 1 week of operation), the average SCFA concentration was 21.3 ± 0.4 g COD/L, with a COD_SCFA/COD_SOL ratio of 0.77 COD/COD. This indicated a successful strategy for accumulating SCFAs from the biowaste mixture by applying a short hydraulic retention time (HRT; 4 days) and a medium organic loading rate (OLR; 7 – 12 g VS/L d) in the lab-scale (V = 4 L) continuous stirred tank reactor (CSTR). The SCFA-rich effluent was then utilized as feedstock for the growth of a mixed microbial consortium able to store polyhydroxyalkanoates (PHA), a class of biopolymers that are completely biodegradable in nature and are produced as an intracellular carbon/energy source. Given the demonstrated antimicrobial and immunomodulatory effects of intracellular PHA on various fish species, the PHA-producing culture was intended to be utilized as SCP in aquaculture.
The growth of PHA-storing biomass was obtained in a 2-L sequencing batch reactor (SBR), fully aerobic and set at 25°C. To stimulate a storage response (PHA production) in the cells, feast-famine conditions were adopted, consisting of an alternation of cycles during which the biomass was exposed to an initial abundance of substrate (feast phase) followed by a starvation period (famine phase). To avoid the proliferation of other bacteria not able to store PHA, the SBR was maintained at a low HRT (2 days). Alongside the stable growth of the mixed microbial consortium (the growth yield was estimated at 0.47 COD/COD), the feast-famine strategy enhanced the PHA production capacity, leading to a final PHA content in the biomass of 16.5 wt%, which is suitable for use as SCP. In fact, by incorporating the waste-derived PHA-rich biomass into fish feed at 20 wt%, the final feed would contain a PHA content of around 3.0 wt%, within the recommended range (0.2–5.0 wt%) for promoting fish health. Proximate analysis of the PHA-rich biomass revealed a good crude protein level (around 51 wt%) and the presence of all the essential amino acids (EAA), together accounting for 31% of the SCP total amino acid composition. This suggests that the waste-derived SCP is a source of good-quality proteins with good nutritional value. The approach offers a sustainable solution for urban waste management, potentially establishing a waste-to-value conversion route connecting waste management to the growing aquaculture and fish feed production sectors.
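The feed-formulation arithmetic above is a simple product of two fractions; the sketch below verifies that a 20 wt% inclusion of a biomass containing 16.5 wt% PHA puts the final feed (~3 wt% PHA) inside the recommended 0.2–5.0 wt% range.

```python
# PHA content of the final feed = (PHA fraction of the biomass)
#                               * (inclusion fraction of biomass in the feed)

def feed_pha_content(biomass_pha_frac, inclusion_frac):
    return biomass_pha_frac * inclusion_frac

content = feed_pha_content(0.165, 0.20)  # ~0.033, i.e. roughly 3 wt%
assert 0.002 <= content <= 0.05          # recommended range: 0.2-5.0 wt%
```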

Keywords: feed supplement, nutritional value, polyhydroxyalkanoates (PHA), single cell protein (SCP), urban organic waste

Procedia PDF Downloads 42
559 Increasing Adherence to Preventative Care Bundles for Healthcare-Associated Infections: The Impact of Nurse Education

Authors: Lauren G. Coggins

Abstract:

Catheter-associated urinary tract infections (CAUTI) and central line-associated bloodstream infections (CLABSI) are among the most common healthcare-associated infections (HAI), contributing to prolonged lengths of stay, greater costs of patient care, and increased patient mortality. Evidence-based preventative care bundles exist to establish consistent, safe patient-care practices throughout an entire organization, helping to ensure the collective application of care strategies that aim to improve patient outcomes and minimize complications. The cardiac intensive care unit at a nationally ranked teaching and research hospital in the United States exceeded its annual CAUTI and CLABSI targets in the fiscal year 2019, prompting examination into the unit’s infection prevention efforts that included preventative care bundles for both HAIs. Adherence to the CAUTI and CLABSI preventative care bundles was evaluated through frequent audits conducted over three months, using standards and resources from The Joint Commission, a globally recognized leader in quality improvement in healthcare and patient care safety. The bundle elements with the lowest scores were identified as the most commonly missed elements. Three elements from both bundles, six elements in total, served as key content areas for the educational interventions targeted to bedside nurses. The CAUTI elements included appropriate urinary catheter order, appropriate continuation criteria, and urinary catheter care. The CLABSI elements included primary tubing compliance, needleless connector compliance, and dressing change compliance. An integrated, multi-platform education campaign featured content on each CAUTI and CLABSI preventative care bundle in its entirety, with additional reinforcement focused on the lowest scoring elements. 
One-on-one educational materials included an informational pamphlet, a badge buddy, a presentation to reinforce nursing care standards, and real-time application through case studies and electronic health record demonstrations. A digital hub was developed on the hospital’s Intranet for quick access to unit resources, and a bulletin board helped track the number of days since the last CAUTI and CLABSI incident. Audits continued to be conducted throughout the education campaign, and staff were given real-time feedback to address any gaps in adherence. Nearly every nurse in the cardiac intensive care unit received all educational materials, and adherence to all six key bundle elements increased after the implementation of the educational interventions. Recommendations from this implementation include providing consistent, comprehensive education across multiple teaching tools and conducting regular audits to track adherence. The multi-platform education campaign brought focus to the evidence-based CAUTI and CLABSI bundles, which in turn will help to reduce CAUTI and CLABSI rates in clinical practice.

Keywords: education, healthcare-associated infections, infection, nursing, prevention

Procedia PDF Downloads 116
558 An Analysis of Economical Drivers and Technical Challenges for Large-Scale Biohydrogen Deployment

Authors: Rouzbeh Jafari, Joe Nava

Abstract:

This study includes learnings from an engineering practice normally performed on large scale biohydrogen processes. If properly scale-up is done, biohydrogen can be a reliable pathway for biowaste valorization. Most of the studies on biohydrogen process development have used model feedstock to investigate process key performance indicators (KPIs). This study does not intend to compare different technologies with model feedstock. However, it reports economic drivers and technical challenges which help in developing a road map for expanding biohydrogen economy deployment in Canada. BBA is a consulting firm responsible for the design of hydrogen production projects. Through executing these projects, activity has been performed to identify, register and mitigate technical drawbacks of large-scale hydrogen production. Those learnings, in this study, have been applied to the biohydrogen process. Through data collected by a comprehensive literature review, a base case has been considered as a reference, and several case studies have been performed. Critical parameters of the process were identified and through common engineering practice (process design, simulation, cost estimate, and life cycle assessment) impact of these parameters on the commercialization risk matrix and class 5 cost estimations were reported. The process considered in this study is food waste and woody biomass dark fermentation. To propose a reliable road map to develop a sustainable biohydrogen production process impact of critical parameters was studied on the end-to-end process. These parameters were 1) feedstock composition, 2) feedstock pre-treatment, 3) unit operation selection, and 4) multi-product concept. A couple of emerging technologies also were assessed such as photo-fermentation, integrated dark fermentation, and using ultrasound and microwave to break-down feedstock`s complex matrix and increase overall hydrogen yield. 
To report the impact of each parameter properly, the KPIs were identified as 1) hydrogen yield, 2) energy consumption, 3) secondary waste generated, 4) CO2 footprint, 5) product profile, 6) $/kg-H2, and 7) environmental impact. The feedstock is the main parameter defining the economic viability of biohydrogen production. Through parametric studies, it was found that biohydrogen production favors feedstock with higher carbohydrate content. The feedstock composition was varied by increasing one critical element (such as carbohydrate) and monitoring the evolution of the KPIs. Different cases were studied with diverse feedstocks, such as energy crops, wastewater sludge, and lignocellulosic waste. The base case process was applied to obtain reference KPI values, and modifications such as pre-treatment and feedstock mix-and-match were implemented to investigate KPI changes. The complexity of the feedstock is the main bottleneck in the successful commercial deployment of the biohydrogen process as a reliable pathway for waste valorization. Hydrogen yield, reaction kinetics, and the performance of key unit operations are highly impacted as feedstock composition fluctuates over the lifetime of the process or from one case to another. In this context, the multi-product concept becomes more reliable: the process is not designed to produce only one target product, such as biohydrogen, but two or more products (biohydrogen and biomethane, or biochemicals). This new approach is being investigated by the BBA team, and the results will be shared in another scientific contribution.

Keywords: biohydrogen, process scale-up, economic evaluation, commercialization uncertainties, hydrogen economy

Procedia PDF Downloads 110
557 Effect of Fresh Concrete Curing Methods on Its Compressive Strength

Authors: Xianghe Dai, Dennis Lam, Therese Sheehan, Naveed Rehman, Jie Yang

Abstract:

Concrete is one of the most widely used construction materials; it may be mixed on site as fresh concrete and then placed in formwork to produce the desired structural shapes. It is recognized that the raw materials and mix proportions dominate the mechanical characteristics of hardened concrete, and that the curing method and environment applied during the early stages of hardening significantly influence concrete properties such as compressive strength, durability, and permeability. In construction practice, various curing methods are used to maintain the presence of mixing water throughout the early stages of hardening. These are also beneficial in hot weather conditions, as they provide cooling and prevent the evaporation of water. Such methods include ponding or immersion, spraying or fogging, and saturated wet coverings. There are also curing methods that reduce the loss of water from the concrete surface, such as covering the concrete with a layer of impervious paper, plastic sheeting, or a membrane. In the concrete materials laboratory, accelerated strength-gain methods supply the concrete with heat and additional moisture through live steam, heating coils, or electrically warmed pads. Currently, when determining the mechanical parameters of a concrete, it is usually sampled from fresh concrete on site and then cured and tested in laboratories where standardized curing procedures are adopted. However, in engineering practice, curing procedures on construction sites after the placing of concrete may differ considerably from the laboratory criteria: some standard laboratory curing procedures cannot be applied on site, and contractors sometimes compromise the curing method in order to reduce construction costs.
Obviously, the difference between curing procedures adopted in the laboratory and those used on construction sites may lead to over- or under-estimation of the real concrete quality. This paper presents the effect of three typical curing methods (air curing, water immersion curing, and plastic film curing), and of maintaining concrete in steel moulds, on the compressive strength development of normal concrete. In this study, Portland cement with 30% fly ash was used, and curing periods of 7, 28, and 60 days were applied. The highest compressive strengths were observed in samples given 7-day water immersion curing and in samples maintained in steel moulds up to the testing date. The results imply that concrete used as infill in steel tubular members might develop a higher strength than predicted by design assumptions based on air curing. Wrapping concrete with plastic film as a curing method might delay strength development in the early stages, while 7-day water immersion curing might significantly increase the compressive strength.

Keywords: compressive strength, air curing, water immersion curing, plastic film curing, maintaining in steel mould, comparison

Procedia PDF Downloads 293
556 Adding a Degree of Freedom to Opinion Dynamics Models

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to exploring two 'degrees of freedom' and how they impact a model's properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. It can be used to change a model's output by up to 100% of its initial value, or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people's opinions can be measured. Even for abstract models (i.e., those not intended for fitting real-world data), it is important to understand whether the way of numerically representing opinions is unique and, if it is not, how the model dynamics would change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet; indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models, using mathematical modeling and then validating the analysis with agent-based simulations. Firstly, we study the case of perfect scales.
In this way, we show that scale transformations affect a model's dynamics up to the qualitative level. This means that if two researchers use the same opinion dynamics model, and even the same dataset, they can reach totally different predictions simply because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions in the same population. This effect can be as strong as introducing an uncertainty of 100% in the simulation's output (i.e., all results are possible). Still, using perfect scales, we show that scale transformations can be used to transform one model exactly into another; we test this with two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision, using a 7-point Likert scale. In this way, we show that even a relatively small scale transformation introduces changes both at the qualitative level (i.e., the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data.
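The effect described above can be illustrated with a minimal sketch: a standard bounded-confidence (Deffuant-style) interaction rule is run twice on the same population, once on the raw opinion scale and once after a monotone non-linear rescaling (here s(x) = x^2, a hypothetical choice). Everything below is illustrative and is not taken from the paper.

```python
import numpy as np

def deffuant_step(x, eps=0.3, mu=0.5, rng=None):
    # One random pairwise interaction of the Deffuant bounded-confidence
    # model: two agents move toward each other if their opinions differ
    # by less than eps.
    if rng is None:
        rng = np.random.default_rng()
    i, j = rng.choice(len(x), size=2, replace=False)
    if abs(x[i] - x[j]) < eps:
        d = mu * (x[j] - x[i])
        x[i] += d
        x[j] -= d
    return x

def run(x0, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        deffuant_step(x, rng=rng)
    return x

rng = np.random.default_rng(1)
x0 = rng.uniform(0, 1, 200)

# Same population measured on two scales: the raw scale, and a monotone
# non-linear rescaling s(x) = x**2 (both are equally valid "opinion scales").
final_raw = run(x0)
final_rescaled = run(x0 ** 2)

# The mean final opinion differs depending on which scale was used,
# even though the underlying population is identical.
print(final_raw.mean(), final_rescaled.mean())
```

Because the symmetric interaction rule conserves the population mean, the two runs settle near different averages, illustrating how the choice of scale alone changes the model's output.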

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 119
555 Preliminary Result on the Impact of Anthropogenic Noise on Understory Bird Population in Primary Forest of Gaya Island

Authors: Emily A. Gilbert, Jephte Sompud, Andy R. Mojiol, Cynthia B. Sompud, Alim Biun

Abstract:

Gaya Island of Sabah is known for its wildlife and marine biodiversity and has established itself as one of the hot destinations for tourists from all around the world. Gaya Island's tourism activities have contributed to Sabah's economic revenue through the high number of tourists visiting the island. However, they have also increased the anthropogenic noise derived from tourism activities, which may greatly interfere with animals such as understory birds that rely on acoustic signals as a tool for communication. Many studies in other regions reveal that anthropogenic noise decreases the species richness of avian communities. In Malaysia, however, published research on the impact of anthropogenic noise on understory birds is still very scarce, and this study was conducted to fill that gap. This study aims to investigate the impact of anthropogenic noise on the understory bird population. Three sites within the primary forest of Gaya Island were chosen to sample the level of anthropogenic noise in relation to the understory bird population. Noise mapping was used to measure the anthropogenic noise level and to identify zones with high (> 60 dB) and low (< 60 dB) anthropogenic noise levels, based on the standard noise-level threshold. The methods used for this study were solely mist netting and ring banding, chosen because they can determine the diversity of the understory bird population in Gaya Island. The preliminary study was conducted from 15th to 26th April and 5th to 10th May 2015, with two mist nets set up in each zone within the selected sites. The data were analyzed using descriptive analysis, presence/absence analysis, diversity indices, and a diversity t-test, with the PAST software used to analyze the obtained data.
The results of this study present a total of 60 individuals, consisting of 12 species from 7 families of understory birds, recorded at the three sites in Gaya Island. The Shannon-Wiener index shows that species diversity in the high and low anthropogenic noise zones was 1.573 and 2.009, respectively, although the statistical analysis shows no significant difference between the zones. Nevertheless, the presence/absence analysis shows that more species were recorded in the low anthropogenic noise zone than in the high anthropogenic noise zone. This result indicates that anthropogenic noise has an impact on the population diversity of understory birds. An in-depth study with a larger sample size at the selected sites is still urgently needed to fully understand the impact of anthropogenic noise on the understory bird population, so that it can then be incorporated into wildlife management for a sustainable environment in Gaya Island.
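The Shannon-Wiener index reported above has the form H' = -sum(p_i * ln p_i) over species proportions p_i. The sketch below shows the computation on hypothetical capture counts; the counts are invented for illustration and are not the study's data (which yielded H' = 1.573 and 2.009).

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln p_i),
    where p_i is the proportion of individuals in species i."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

# Hypothetical capture counts per species in each noise zone.
# The low-noise zone has more species and a more even distribution,
# so its index is higher.
high_noise = [12, 8, 5, 3, 2]
low_noise = [6, 5, 5, 4, 4, 3, 3, 2, 2, 1]

print(round(shannon_wiener(high_noise), 3))
print(round(shannon_wiener(low_noise), 3))
```

For a community of k equally abundant species the index reaches its maximum, ln(k), which is a useful sanity check on the implementation.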

Keywords: anthropogenic noise, biodiversity, Gaya Island, understory bird

Procedia PDF Downloads 365
554 Solar-Electric Pump-out Boat Technology: Impacts on the Marine Environment, Public Health, and Climate Change

Authors: Joy Chiu, Colin Hemez, Emma Ryan, Jia Sun, Robert Dubrow, Michael Pascucilla

Abstract:

The popularity of recreational boating is on the rise in the United States, which raises numerous national-level challenges in the management of air and water pollution, aquatic habitat destruction, and waterway access. The need to control sewage discharge from recreational vessels underlies all of these challenges. The release of raw human waste into aquatic environments can lead to eutrophication and algal blooms; can increase human exposure to pathogenic viruses, bacteria, and parasites; can financially impact commercial shellfish harvests, fisheries, and marine bathing areas; and can restrict access to recreational and commercial waterways to the detriment of local economies. Because of the damage that unregulated sewage discharge can do to environments, human health, and marine life, recreational vessels in the United States are required by law to 'pump out' sewage from their holding tanks into sewage treatment systems in all designated 'no discharge areas'. Many pump-out boats, which transfer waste out of recreational vessels, are operated and maintained using funds allocated through the Federal Clean Vessel Act (CVA). The East Shore District Health Department of Branford, Connecticut is protecting its local estuary by pioneering the design and construction of the first zero-emissions solar-electric pump-out boat of its size in the nation, replacing one of its older traditional gasoline-powered models through a Connecticut Department of Energy and Environmental Protection CVA grant.
This study, conducted in collaboration with the East Shore District Health Department, the Connecticut Department of Energy and Environmental Protection, the States Organization for Boating Access, and Connecticut's CVA program coordinators, had two aims: (1) to perform a national assessment of pump-out boat programs, supplemented by a limited international assessment, to establish best pump-out boat practices (regardless of how the boat is powered); and (2) to estimate the cost, greenhouse gas emissions, and environmental and public health impacts of solar-electric versus traditional gasoline-powered pump-out boats. A national survey was conducted of all CVA-funded pump-out program managers and selected pump-out boat operators to gauge best practices; the costs associated with gasoline-powered pump-out boat operation and management; and the regional, cultural, and policy-related issues that might arise from the adoption of solar-electric pump-out boat technology. We also conducted life-cycle analyses of gasoline-powered and solar-electric pump-out boats to compare their greenhouse gas emissions; production of air, soil, and water pollution; and impacts on human health. This work comprises the most comprehensive study of pump-out boating practices in the United States to date, synthesizing information obtained at the local, state, national, and international levels. It aims to enable CVA programs to make informed recommendations for sustainable pump-out boating practices, and identifies the challenges and opportunities that remain for the wide adoption of solar-electric pump-out boat technology.

Keywords: pump-out boat, marine water, solar-electric, zero emissions

Procedia PDF Downloads 128
553 Bacteriological Spectrum and Resistance Patterns of Common Clinical Isolates from Infections in Cancer Patients

Authors: Vivek Bhat, Rohini Kelkar, Sanjay Biswas

Abstract:

Introduction: Cancer patients are at increased risk of bacterial infections. This may be due to the disease process itself, the effects of chemotherapeutic drugs, or invasive procedures such as catheterization. A wide variety of bacteria, including some emerging pathogens, are increasingly being reported from these patients. The incidence of multidrug-resistant organisms, particularly in the Gram-negative group, is also increasing, with higher resistance rates seen to cephalosporins, β-lactam/β-lactamase inhibitor combinations, and the carbapenems. This study documents the bacteriological spectrum of infections and their resistance patterns in cancer patients. Methods: This study includes all bacterial isolates recovered from infections in cancer patients over a period of 18 months. Samples included blood cultures, pus/wound swabs, urine, tissue biopsies, body fluids, catheter tips, and respiratory specimens such as sputum and bronchoalveolar lavage (BAL). All samples were processed in the microbiology laboratory as per standard laboratory protocols. Organisms were identified to species level, and antimicrobial susceptibility testing was performed manually by the disc diffusion technique or on the Vitek-2 (bioMérieux, France) instrument. Interpretations were as per Clinical and Laboratory Standards Institute (CLSI) guidelines. Results: A total of 1150 bacterial isolates were cultured from 884 test samples during the study period. Of these, 227 were Gram-positive and 923 were Gram-negative organisms. Staphylococcus aureus (99 isolates) was the most common Gram-positive isolate, followed by Enterococcus (79) and group A Streptococcus (30). Among the Gram-negatives, E. coli (304), Pseudomonas aeruginosa (201), and Klebsiella pneumoniae (190) were the most common. Of the Staphylococcus aureus isolates, 27.2% were methicillin-resistant. Only 5.06% of enterococci were vancomycin-resistant. High rates of resistance to cefotaxime and ciprofloxacin were seen amongst E. coli (84.8% and 83.55%) and Klebsiella pneumoniae (71% and 62.1%), respectively. Resistance to carbapenems (meropenem) was high at 70% in Acinetobacter spp.; however, all isolates were sensitive to colistin. Among the aminoglycosides, amikacin retained good efficacy against Escherichia coli (82.9%) and Pseudomonas aeruginosa (78.1%). Occasional isolates of emerging pathogens such as Chryseobacterium indologenes, Roseomonas, and Achromobacter xylosoxidans were also recovered. Conclusion: The common infections in cancer patients include respiratory and wound infections, urinary tract infections, and sepsis. The most common isolates were Staphylococcus aureus, enterococci, Escherichia coli, Klebsiella pneumoniae, and Pseudomonas aeruginosa. There is a high level of resistance to the commonly used antibiotics among Gram-negative organisms.

Keywords: bacteria, resistance, infection, cancer

Procedia PDF Downloads 299
552 Production of Medicinal Bio-active Amino Acid Gamma-Aminobutyric Acid In Dairy Sludge Medium

Authors: Farideh Tabatabaee Yazdi, Fereshteh Falah, Alireza Vasiee

Abstract:

Introduction: Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is widely present in organisms. GABA is a pharmacologically and biologically active component with wide and useful applications. Several important physiological functions of GABA have been characterized, such as neurotransmission and the induction of hypotension. GABA is also a strong secretagogue of insulin from the pancreas, effectively inhibits small airway-derived lung adenocarcinoma, and acts as a tranquilizer. Many microorganisms can produce GABA, and lactic acid bacteria have been a focus of research in recent years because they possess special physiological activities and are generally regarded as safe. Among them, Lb. brevis produces the highest amount of GABA. The major factors affecting GABA production have been characterized, including carbon sources and glutamate concentration. The use of food industry waste to produce valuable products such as amino acids appears to be a good way to reduce production costs and prevent the waste of food resources. In a dairy factory, a high volume of sludge is produced by the separator; it contains useful compounds such as growth factors, carbon, nitrogen, and organic matter that can be used by microorganisms such as Lb. brevis as carbon and nitrogen sources. It is therefore a good substrate for GABA production. GABA is primarily formed by the irreversible α-decarboxylation of L-glutamic acid or its salts, catalysed by the GAD enzyme. In the present study, this aim was achieved through the fast growth of Lb. brevis and GABA production, using dairy industry sludge as the growth medium. Lactobacillus brevis strains obtained from the Microbial Type Culture Collection (MTCC) were used as model strains. To prepare the dairy sludge as a medium, it was sterilized at 121 °C for 15 minutes. Lb. brevis was inoculated into the sludge medium at pH 6 and incubated for 120 hours at 30 °C.
After fermentation, the supernatant was collected by centrifugation, and the GABA produced was analyzed qualitatively by thin-layer chromatography (TLC) and quantitatively by high-performance liquid chromatography (HPLC). As the percentage of dairy sludge in the culture medium increased, the amount of GABA increased. The evaluation of bacterial growth in this medium also showed the positive effect of dairy sludge on the growth of Lb. brevis, which resulted in the production of more GABA. GABA-producing LAB offer the opportunity to develop naturally fermented, health-oriented products. Although some GABA-producing LAB have been isolated to find strains suitable for different fermentations, further screening of various GABA-producing strains, especially high-yielding strains, is necessary. The production of gamma-aminobutyric acid by lactic acid bacteria is safe and eco-friendly, and the use of dairy industry waste enhances environmental safety while offering the possibility of producing valuable compounds such as GABA. In general, dairy sludge is a suitable medium for the growth of lactic acid bacteria and the production of this amino acid, and by providing the carbon and nitrogen sources it can reduce the final production cost.

Keywords: GABA, Lactobacillus, HPLC, dairy sludge

Procedia PDF Downloads 144
551 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending

Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim

Abstract:

Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements, since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM) as well as design proposals found in the literature, to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength prediction of CFS beam-columns and of the effects of warping-restrained boundary conditions on their behavior. To this end, experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in ABAQUS software, accounting for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (i.e., 0.5, 1.5, and 3 m), thickness (i.e., 1, 2, and 4 mm), and cross-sectional dimensions, under ten different load eccentricity levels. The results of this parametric study demonstrated that the DSM produced conservative strength predictions for beam-column members, by up to 55% depending on the element's length and thickness.
This can be attributed to errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) the equations used to calculate buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code-suggested interaction equation led to an average error of 4% to 22%, depending on the element length. This paper highlights the need for more reliable design solutions for CFS beam-column elements for practical design purposes.
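The code-prescribed linear interaction check criticized above has the general form P/P_n + M/M_n <= 1.0 (demand over capacity for axial load and bending, summed linearly). A minimal sketch follows; resistance/safety factors are deliberately omitted, and the numeric values are illustrative, not from the paper.

```python
def linear_interaction_ok(P, M, P_n, M_n):
    """Simplified linear beam-column interaction check of the general
    form used in CFS design codes: P/P_n + M/M_n <= 1.0.
    P, M  : applied axial load and bending moment (demand)
    P_n,M_n: nominal axial and flexural capacities
    Resistance/safety factors are omitted for clarity."""
    ratio = P / P_n + M / M_n
    return ratio, ratio <= 1.0

# A member using 60% of its axial capacity and 30% of its flexural capacity:
ratio, ok = linear_interaction_ok(P=60, M=30, P_n=100, M_n=100)
print(ratio, ok)  # ratio ~ 0.9, check passes
```

The paper's point is precisely that this linear sum ignores the non-linear interaction between the two actions (and any warping-restraint benefit), which is why it can be conservative by a large margin.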

Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions

Procedia PDF Downloads 104
550 The Value of Computerized Corpora in EFL Textbook Design: The Case of Modal Verbs

Authors: Lexi Li

Abstract:

This study aims to contribute to the field of how computer technology can be exploited to enhance EFL textbook design. Specifically, it demonstrates how computerized native and learner corpora can be used to improve the treatment of modal verbs in EFL textbooks. The linguistic focus is on will, would, can, could, may, might, shall, should, and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014); the spoken part was chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of the occurrences of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL), from which all essays under the "secondary school" section were selected. A series of five secondary coursebooks comprises the textbook corpus. All data in both the learner and textbook corpora were retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions, to examine whether the textbooks reflect authentic English use. Secondly, the learner corpus was compared with the textbook corpus in terms of use (distributional features, semantic functions, and co-occurring constructions), in order to examine the degree of influence of the textbook on learners' use of modal verbs. Moreover, the learner corpus was analyzed for misuse (syntactic errors, e.g., she can sings*) of the nine modal verbs, to uncover potential difficulties confronting learners. The results indicate discrepancies between the textbook presentation of modal verbs and authentic modal use in natural discourse in terms of frequency distributions, semantic functions, and co-occurring structures.
Furthermore, there are consistent patterns of use between the learner corpus and the textbook corpus with respect to the three above-mentioned aspects, except for could, will, and must, partially confirming the correlation between frequency effects and L2 grammar acquisition. Further analysis reveals that the exceptions are caused by both positive and negative L1 transfer, indicating that frequency effects can be overridden by L1 interference. In addition, error analysis revealed that could, would, should, and must are the most difficult for Chinese learners, due to both inter-linguistic and intra-linguistic interference. The discrepancies between the textbook corpus and the native corpus point to a need to adjust the presentation of modal verbs in the textbooks in terms of frequencies, different meanings, and verb-phrase structures. Along with adjusting the treatment of modal verbs on the basis of authentic use, it is important for textbook writers to take into consideration L1 interference as well as learners' difficulties in using modal verbs. The present study is a methodological showcase of combining native and learner corpora to enhance the language authenticity and learner appropriateness of EFL textbooks.
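The distributional comparison described above can be sketched as a relative-frequency count of the nine modals. The two text samples below are toy stand-ins for concordance output from the textbook corpus and TECCL, invented purely for illustration; a real analysis would read the corpus files.

```python
import re
from collections import Counter

MODALS = ["will", "would", "can", "could", "may",
          "might", "shall", "should", "must"]

def modal_frequencies(text):
    """Relative frequency (per 1,000 tokens) of the nine core modal verbs."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in MODALS)
    return {m: 1000 * counts[m] / len(tokens) for m in MODALS}

# Toy stand-ins for a textbook sample and a learner-essay sample.
textbook = "You can go now. We will see. You must try. It may rain."
learner = "I can do it. I can help. We will win. You should rest."

print(modal_frequencies(textbook))
print(modal_frequencies(learner))
```

Comparing such per-1,000-token profiles between corpora is what reveals over- or under-representation of individual modals in the textbooks.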

Keywords: EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 124
549 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms; they can collect, process, and store data on their own, and can run complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform OpenBCI, is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning-based classifiers were used to perform graphical spectrogram categorization of EEG signals and to predict emotional states from the properties of the input data. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In EEG signal processing, after each EEG signal has been received in real time and translated from the time to the frequency domain, the Fast Fourier Transform (FFT) is used to observe the frequency bands in each EEG signal. To capture the variance of each EEG frequency band appropriately, the power density, standard deviation, and mean are calculated and used as features.
The next stage is to use the selected features to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device such as the NVIDIA Jetson Nano. On the edge, EEG-based emotion identification can be employed in applications that can rapidly expand both research and industrial adoption.
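The pipeline described, FFT-based band-power features (plus standard deviation and mean) followed by a KNN vote, can be sketched as follows. The sampling rate, band limits, feature set, and synthetic test signals are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """FFT-based mean power in each EEG band, plus the signal's
    standard deviation and mean, as a feature vector."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    feats = [psd[(freqs >= lo) & (freqs < hi)].mean()
             for lo, hi in BANDS.values()]
    return np.array(feats + [signal.std(), signal.mean()])

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-nearest-neighbour majority vote on Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Synthetic demo: alpha-dominant vs beta-dominant one-second "epochs"
# as stand-ins for two emotional states in arousal/valence space.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS

def epoch(f):
    return np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(FS)

X = np.array([band_powers(epoch(10)) for _ in range(10)] +
             [band_powers(epoch(20)) for _ in range(10)])
y = np.array([0] * 10 + [1] * 10)

print(knn_predict(X, y, band_powers(epoch(10))))  # expect class 0 (alpha-dominant)
```

Because KNN has no training phase beyond storing the feature vectors, it maps naturally onto an edge device such as the Jetson Nano, where the full feature store fits in memory and prediction is a single distance pass.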

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 105
548 Combination Therapies Targeting Apoptosis Pathways in Pediatric Acute Myeloid Leukemia (AML)

Authors: Ahlam Ali, Katrina Lappin, Jaine Blayney, Ken Mills

Abstract:

Leukaemia is the most frequently occurring type of paediatric cancer (30%). Of these cases, approximately 80% are acute lymphoblastic leukaemia (ALL), with acute myeloid leukaemia (AML) making up most of the remaining 20% alongside other leukaemias. Unfortunately, children with AML do not have a promising prognosis, with only 60% surviving 5 years or longer. The need for age-specific therapies for AML patients has recently been highlighted, as paediatric AML cases have a different mutational landscape from AML diagnosed in adult patients. Drug repurposing is a recognized strategy in drug discovery and development in which an already approved drug is used for diseases other than those originally indicated. We aim to identify novel combination therapies with the promise of providing alternative, more effective, and less toxic induction therapy options. Our in-silico analysis highlighted 'cell death and survival' as an aberrant, potentially targetable pathway in paediatric AML patients. On this basis, 83 apoptosis-inducing compounds were screened. A preliminary single-agent screen was first performed to eliminate potentially toxic compounds; the remaining drugs were then arranged into a pooled library with 10 drugs per well over 160 wells, giving 45 possible pairs and 120 possible triples in each well. Seven cell lines were used during this study to represent the clonality of AML in paediatric patients (Kasumi-1, CMK, CMS, MV4-11, PL21, THP1, MOLM-13). Cytotoxicity was assessed for up to 72 hours using the CellTox™ Green reagent. Fluorescence readings were normalized to a DMSO control. A Z-score was assigned to each well based on the mean and standard deviation of all the data. Combinations with a Z-score < 2 were eliminated, and the remaining wells were taken forward for further analysis. A well was considered 'successful' if each drug individually demonstrated a Z-score < 2 while the combination exhibited a Z-score > 2.
Each of the ten compounds in one well (well 155) had minimal or no effect on cell viability as a single agent; however, a combination of two or more of the compounds resulted in a substantial increase in cell death. The ten compounds were therefore de-convoluted to identify possible synergistic pair/triple combinations. The screen identified two possible 'novel' drug pairings, with the BCL2 inhibitor ABT-737 combined with either the CDK inhibitor Purvalanol A or the AKT/PI3K inhibitor LY294002 (ABT-737 100 nM + Purvalanol A 1 µM; ABT-737 100 nM + LY294002 2 µM). Three possible triple combinations were identified (LY2409881 + Akti-1/2 + Purvalanol A, SU9516 + Akti-1/2 + Purvalanol A, and ABT-737 + LY2409881 + Purvalanol A), which will be taken forward to examine their efficacy at varying concentrations and dosing schedules across multiple paediatric AML cell lines, for optimisation of maximum synergy. We believe that our combination screening approach has potential for future use with a larger cohort of drugs, including FDA-approved compounds, and with patient material.
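The Z-score hit rule described above (every single agent inactive at Z < 2 while the combination exceeds Z > 2) can be sketched as follows; the plate readings are hypothetical and for illustration only.

```python
import statistics

def z_scores(values):
    """Z-score of each well reading against the plate mean and
    (sample) standard deviation of all the data."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def is_synergy_hit(single_z, combo_z, cutoff=2.0):
    """Hit rule from the screen: each single agent is inactive
    (Z < cutoff) while the combination well is active (Z > cutoff)."""
    return all(z < cutoff for z in single_z) and combo_z > cutoff

# Hypothetical normalized cell-death readings: nine inactive wells
# and one strongly active combination well.
plate = [1.0, 1.1, 0.9, 1.2, 1.0, 0.8, 1.1, 0.9, 1.0, 6.0]
z = z_scores(plate)
print(is_synergy_hit(single_z=z[:2], combo_z=z[-1]))  # True
```

In the pooled design, a well flagged this way is then de-convoluted, i.e. its 45 pairs and 120 triples are retested individually to locate the synergistic subset.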

Keywords: AML, drug repurposing, ABT-737, apoptosis

Procedia PDF Downloads 203