Search results for: “User acceptance of computer technology: A comparison of two theoretical models”
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23353


1543 Comparison of Several Peat Qualities as Amendment to Improve Afforestation of Mine Wastes

Authors: Marie Guittonny-Larchevêque

Abstract:

In boreal Canada, industrial activities such as forestry, peat extraction, and metal mining often occur near one another. At closure, mine waste storage facilities must be reclaimed. On tailings storage facilities, tree plantations can achieve rapid restoration of forested landscapes. However, trees grow poorly in mine tailings, and organic amendments such as peat are required to improve the tailings' structure and nutrient content. Canada is a well-known producer of horticultural-quality peat, but some lower-quality peats from areas adjacent to the reclaimed mines could also support successful revegetation. In particular, hemic peat from the bottom of peat bogs is more decomposed than fibric peat and is less valued for horticulture. Moreover, forest peat is sometimes excavated and piled by the forest industry after cuttings to stimulate tree regeneration on the exposed mineral soil. The objective of this project was to compare the ability of peats of differing quality and origin to improve tailings structure and nutrients and to promote tree development. A greenhouse experiment was conducted over one growing season in 2016 with a complete randomized block design combining 8 repetitions (blocks) x 2 tree species (Populus tremuloides and Pinus banksiana) x 6 substrates (tailings, commercial horticultural peat, and mixtures of tailings with commercial peat, forest peat, local fibric peat, or local hemic peat) x 2 fertilization levels (with or without mineral fertilization). The tailings used came from a gold mine and were low in sulfur and trace metals. The commercial peat had a slightly acidic pH (around 6), while the other peats had a clearly acidic pH (around 3). However, mixing peat with the slightly alkaline tailings resulted in a pH close to 7 whatever the peat tested. The macroporosity of the mixtures was intermediate between the low value of tailings (4%) and the high value of commercial peat alone (34%).
Seedling survival was lower on tailings for poplar compared to all other treatments, with or without fertilization. Survival and growth were similar among all treatments for pine. Fertilization had no impact on the maximal height and diameter of poplar seedlings but changed the relative performance of the substrates. When not fertilized, poplar seedlings grown in commercial peat were the tallest and largest, those in tailings the smallest and most slender, with intermediate values in the mixtures. When fertilized, poplar seedlings grown in commercial peat were smaller and more slender than in all other substrates. However, for this species, foliar, shoot, and root biomass production was greatest in commercial peat and lowest in tailings compared to all mixtures, whether fertilized or not. The mixture with local fibric peat gave seedlings the lowest foliar N concentrations of all substrates, whatever the species or fertilization treatment. In the short term, the performance of all the tested peats was similar when mixed with tailings, showing that lower-quality peats could be valorized instead of using horticultural peat. These results demonstrate that intersectorial synergies consistent with the principles of the circular economy could be developed in boreal Canada between local industries around the reclamation of mine waste dumps.
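As a quick check of the scale of the factorial design described in the abstract, the stated factors imply 24 treatment combinations replicated across 8 blocks. The sketch below enumerates them (factor labels paraphrased from the abstract):

```python
from itertools import product

# Factor levels as described in the abstract (labels paraphrased)
species = ["Populus tremuloides", "Pinus banksiana"]
substrates = ["tailings", "commercial peat", "tailings + commercial peat",
              "tailings + forest peat", "tailings + local fibric peat",
              "tailings + local hemic peat"]
fertilization = ["fertilized", "unfertilized"]
blocks = 8  # repetitions

treatments = list(product(species, substrates, fertilization))
print(len(treatments))           # 2 x 6 x 2 = 24 combinations
print(len(treatments) * blocks)  # 192 experimental units across blocks
```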

Keywords: boreal trees, mine spoil, mine revegetation, intersectorial synergies

Procedia PDF Downloads 238
1542 Immunoliposome-Mediated Drug Delivery to Plasmodium-Infected and Non-Infected Red Blood Cells as a Dual Therapeutic/Prophylactic Antimalarial Strategy

Authors: Ernest Moles, Patricia Urbán, María Belén Jiménez-Díaz, Sara Viera-Morilla, Iñigo Angulo-Barturen, Maria Antònia Busquets, Xavier Fernàndez-Busquets

Abstract:

Bearing in mind the absence of an effective vaccine against malaria and its severe clinical manifestations, which cause nearly half a million deaths every year, this disease remains a major threat to life. Moreover, currently marketed antimalarial approaches administer drugs on their own, promoting the emergence of drug-resistant parasites, because the drug payload delivered into the parasitized erythrocyte is limited: it must be high enough to kill the intracellular pathogen while minimizing the risk of toxic side effects to the patient. This dichotomy has been successfully addressed through the specific delivery of immunoliposome (iLP)-encapsulated antimalarials to Plasmodium falciparum-infected red blood cells (pRBCs). Unfortunately, this strategy has not progressed towards clinical applications, and in vitro assays rarely reach drug efficacy improvements above 10-fold. Here, we show that encapsulation efficiencies above 96% can be achieved for the weakly basic drugs chloroquine (CQ) and primaquine using the pH-gradient active loading method in liposomes composed of neutrally charged, saturated phospholipids. Targeting antibodies are best conjugated through their primary amino groups, adjusting the chemical crosslinker concentration to retain significant antigen recognition. Antigens from non-parasitized RBCs have also been considered as targets for the intracellular delivery of drugs that do not affect erythrocytic metabolism. Using this strategy, we have obtained unprecedented nanocarrier targeting to early intraerythrocytic stages of the malaria parasite, for which there is a lack of specific extracellular molecular tags.
Polyethylene glycol-coated liposomes conjugated with monoclonal antibodies specific for the erythrocyte surface protein glycophorin A (anti-GPA iLPs) were capable of targeting 100% of RBCs and pRBCs at a low concentration of 0.5 μM total lipid in the culture, with >95% of added iLPs retained by the cells. When exposed for only 15 min to P. falciparum in vitro cultures synchronized at early stages, free CQ had no significant effect on parasite viability up to 200 nM, whereas iLP-encapsulated CQ at 50 nM completely arrested growth. Furthermore, when assayed in vivo in P. falciparum-infected humanized mice, anti-GPA iLPs cleared the pathogen below detectable levels at a CQ dose of 0.5 mg/kg. In comparison, free CQ administered at 1.75 mg/kg was at least 40-fold less efficient. Our data suggest that this significant improvement in antimalarial efficacy is due in part to a prophylactic effect: the pathogen encounters CQ in its host cell at the very moment of invasion.

Keywords: immunoliposomal nanoparticles, malaria, prophylactic-therapeutic polyvalent activity, targeted drug delivery

Procedia PDF Downloads 357
1541 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning

Authors: Ioanna Taouki, Marie Lallier, David Soto

Abstract:

Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative-forced-choice tasks (two linguistic: lexical decision task, visual attention span task, and one non-linguistic: emotion recognition task) including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how confidence ratings track accuracy in the task) relates to performance in general standardized tasks related to students' reading and general cognitive abilities using Spearman's and Bayesian correlation analysis. Second, we assessed whether or not young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across different task domains and evaluating cross-task covariance by applying a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability in this early stage is related to the longitudinal learning of children in a linguistic and a non-linguistic task. Notably, we did not observe any association between students’ reading skills and metacognitive processing in this early stage of reading acquisition. 
Some evidence consistent with domain-general metacognition was found, with significant positive correlations in metacognitive efficiency between the lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in the linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities and further stress the importance of creating educational programs that foster students' metacognitive ability as a tool for long-term learning. More research is needed to understand whether such programs can enhance metacognitive ability as a transferable skill across distinct domains or whether individual domains should be targeted separately.

Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition

Procedia PDF Downloads 135
1540 Exhaust Gas Cleaning Systems on Board Ships and Impact on Crews’ Health: A Feasibility Study Protocol

Authors: Despoina Andrioti Bygvraa, Ida-Maja Hassellöv, George Charalambous

Abstract:

Exhaust gas cleaning systems, also known as scrubbers, are today widely used to allow ships to burn high-sulphur heavy fuel oil while still complying with the regulations limiting the sulphur content of marine fuels. There are extensive concerns about environmental consequences, especially in the Baltic Sea, from the wide-scale use of scrubbers, as the wash water is acidic (ca. pH 3) and contains high concentrations of toxic, carcinogenic, and mutagenic substances. The aim of this feasibility study is to investigate the potential adverse effects on seafarers' health, with the ultimate goal of raising awareness of chemical-related health and safety issues in the shipping environment. The project received funding from the Swedish Foundation. The team will extend previously compiled data on scrubber wash water concentrations of hazardous substances and pH to include the use of strong base in closed-loop scrubbers, together with a scoping assessment of handling and disposal practices. Based on the findings, (a) a systematic review of risk assessments will follow to establish the risks of exposure, the hazardous levels for human health, and the respective prevention practices. In addition, the researchers will perform (b) a systematic review to identify facilitators of and barriers to crew compliance with the safe handling of chemicals. The study will run for 12 months, delivering (a) a risk assessment inventory of exposure risks and (b) a course description of safe handling practices. This feasibility study could provide valuable knowledge on how the pollutants found in scrubbers should be considered from a human health perspective, facilitating evidence-based decisions in future technology and policy development to make shipping a safer, healthier, and more attractive workplace.

Keywords: health and safety, seafarers, scrubbers, chemicals, risk exposures

Procedia PDF Downloads 34
1539 A Carrier Phase High Precision Ranging Theory Based on Frequency Hopping

Authors: Jie Xu, Zengshan Tian, Ze Li

Abstract:

Previous indoor ranging or localization systems achieving high-accuracy time-of-flight (ToF) estimation relied on two key points. One is strict time and frequency synchronization between the transmitter and receiver to eliminate equipment asynchrony errors such as carrier frequency offset (CFO), which is difficult to achieve in a practical communication system. The other is extending the total bandwidth of the communication, because the accuracy of ToF estimation is proportional to the bandwidth: the larger the total bandwidth, the higher the accuracy of the ToF estimate. Ultra-wideband (UWB) technology, for example, is built on this principle, but high-precision ToF estimation is difficult to achieve in common WiFi or Bluetooth systems, whose bandwidth is lower than UWB's. It is therefore meaningful to study how to achieve high-precision ranging with lower bandwidth when the transmitter and receiver are asynchronous. To tackle these problems, we propose a two-way channel error elimination theory and a frequency hopping-based carrier phase ranging algorithm to achieve high-accuracy ranging under asynchronous conditions. The two-way channel error elimination theory uses the symmetry of the two-way channel to remove the asynchronous phase error caused by the asynchronous transmitter and receiver; we also study the effect of the two-way channel generation time difference on the phase according to the characteristics of different hardware devices. The frequency hopping-based carrier phase ranging algorithm uses frequency hopping to extend the equivalent bandwidth and incorporates a carrier phase ranging algorithm with multipath resolution, achieving ranging accuracy comparable to that of UWB at 400 MHz bandwidth within the typical 80 MHz bandwidth of commercial WiFi.
Finally, to verify the validity of the algorithm, we implement this theory using a software radio platform, and the actual experimental results show that the method proposed in this paper has a median ranging error of 5.4 cm in the 5 m range, 7 cm in the 10 m range, and 10.8 cm in the 20 m range for a total bandwidth of 80 MHz.
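To put the bandwidth argument in numbers: the coarse range resolution of a correlation-based ToF estimator scales roughly as c/(2B), which is why hopping across channels to synthesize a wider equivalent bandwidth matters, and the carrier-phase refinement then pushes the error well below this limit. A back-of-envelope sketch (not taken from the paper):

```python
C = 3.0e8  # speed of light, m/s

def coarse_range_resolution(bandwidth_hz):
    """Rule-of-thumb ToF range resolution c / (2B) for bandwidth B."""
    return C / (2 * bandwidth_hz)

wifi = coarse_range_resolution(80e6)   # typical commercial WiFi channel
uwb = coarse_range_resolution(400e6)   # UWB-like equivalent bandwidth
print(wifi, uwb)  # 1.875 m vs 0.375 m
```

The reported cm-level median errors sit far below either figure, which is what the carrier-phase step buys on top of the coarse ToF estimate.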

Keywords: frequency hopping, phase error elimination, carrier phase, ranging

Procedia PDF Downloads 106
1538 Demonstration Operation of Distributed Power Generation System Based on Carbonized Biomass Gasification

Authors: Kunio Yoshikawa, Ding Lu

Abstract:

Small-scale, distributed, and low-cost biomass power generation technologies are in high demand in modern society, particularly in the disaster areas of developed countries and the un-electrified rural areas of developing countries. This work aims to demonstrate the technical feasibility of a portable, ultra-small power generation system based on the gasification of carbonized wood pellets/briquettes. Our project is designed to enable independent energy production from various kinds of biomass resources in the open field. The whole system consists of two main stages: biomass and waste pretreatment, then gasification and power generation. The first stage comprises carbonization and densification (briquetting or pelletization); the second comprises updraft fixed-bed gasification of the carbonized pellets/briquettes, syngas purification, and power generation with an internal combustion gas engine. A combined pretreatment process including carbonization without external energy input and densification was adopted to deal with various biomasses. Carbonized pellets showed better gasification performance than carbonized briquettes or their mixture. The results of a 100-hour continuous operation indicated that pelletization/briquetting of the carbonized fuel enabled stable operation of the updraft gasifier, provided there were no blockages caused by tar accumulation. The cold gas efficiency and carbon conversion during carbonized wood pellet gasification were about 49.2% and 70.5%, respectively, at an air equivalence ratio of around 0.32, and the corresponding overall efficiency of the gas engine was 20.3% during the stable stage. Moreover, the maximum output power was 21 kW at an air flow rate of 40 Nm³·h⁻¹. Therefore, the comprehensive system covering biomass carbonization, densification, gasification, syngas purification, and the engine is feasible for portable, ultra-small power generation.
This work has been supported by Innovative Science and Technology Initiative for Security (Ministry of Defence, Japan).

Keywords: biomass carbonization, densification, distributed power generation, gasification

Procedia PDF Downloads 146
1537 Effectiveness of Gamified Virtual Physiotherapy in Patients with Shoulder Problems

Authors: A. Barratt, M. H. Granat, S. Buttress, B. Roy

Abstract:

Introduction: Physiotherapy is an essential part of the treatment of patients with shoulder problems. Treatment usually centres on specific physiotherapy goals, ultimately aiming at improvement in pain and function. This study investigates whether computerised physiotherapy based on gamification principles is as effective as standard physiotherapy. Methods: Physiotherapy exergames were created using commercially available hardware, the Microsoft Kinect, combined with bespoke software. The exergames were validated by mapping them to physiotherapy goals, which included strength, range of movement, control, speed, and activation of the kinetic chain. A multicentre, randomised prospective controlled trial investigated the use of exergames in patients with Shoulder Impingement Syndrome who had undergone Arthroscopic Subacromial Decompression surgery. The intervention group was provided with the automated sensor-based technology, allowing them to perform exergames and track their rehabilitation progress. The control group was treated with standard physiotherapy protocols. Outcomes from different domains were used to compare the groups. An important metric was the assessment of shoulder range of movement pre- and post-operatively. The range of movement data included abduction, forward flexion, and external rotation, which were measured by the software pre-operatively and at 6 and 12 weeks post-operatively. Results: Both groups showed significant improvement from pre-operative to 12 weeks in elevation in the forward flexion and abduction planes. Abduction improved in both the intervention group (p = 0.015) and the control group (p = 0.003), as did forward flexion (intervention group p = 0.020, control group p = 0.004).
There was, however, no significant difference between the groups at 12 weeks for abduction (p = 0.118), forward flexion (p = 0.190), or external rotation (p = 0.347). Conclusion: Exergames may be used as an alternative to standard physiotherapy regimes; however, further analysis focusing on patient engagement is required.

Keywords: shoulder, physiotherapy, exergames, gamification

Procedia PDF Downloads 176
1536 Electroencephalogram during Natural Reading: Theta and Alpha Rhythms as Analytical Tools for Assessing a Reader’s Cognitive State

Authors: D. Zhigulskaya, V. Anisimov, A. Pikunov, K. Babanova, S. Zuev, A. Latyshkova, K. Chernozatonskiy, A. Revazov

Abstract:

Electrophysiology of information processing during reading is a popular research topic. Natural reading, however, has been relatively poorly studied, despite having broad potential applications for learning and education. In the current study, we explore the relationship between text categories and the spontaneous electroencephalogram (EEG) recorded while reading. Thirty healthy volunteers (mean age 26.68 ± 1.84) participated in this study. 15 Russian-language texts were used as stimuli. The first text was used for practice and was excluded from the final analysis. The remaining 14 formed opposite pairs in one of 7 categories, the most important of which were: interesting/boring, fiction/non-fiction, free reading/reading with an instruction, and reading a text/reading a pseudo text (consisting of strings of letters that formed meaningless words). Participants read the texts sequentially on an Apple iPad Pro. EEG was recorded from 12 electrodes simultaneously with eye movement data via Apple's ARKit technology. EEG spectral amplitude was analyzed at Fz for the theta band (4-8 Hz) and at C3, C4, P3, and P4 for the alpha band (8-14 Hz) using the Friedman test. We found that reading an interesting text was accompanied by a higher theta spectral amplitude at Fz than reading a boring text (3.87 ± 0.12 µV and 3.67 ± 0.11 µV, respectively). When instructions were given for reading, we observed less alpha activity than during free reading of the same text (3.34 ± 0.20 µV and 3.73 ± 0.28 µV, respectively, for C4 as the most representative channel). The non-fiction text elicited less alpha-band activity (C4: 3.60 ± 0.25 µV) than the fiction text (C4: 3.66 ± 0.26 µV). A significant difference in alpha spectral amplitude was also observed between the regular text (C4: 3.64 ± 0.29 µV) and the pseudo text (C4: 3.38 ± 0.22 µV). These results suggest that some of the brain activity seen on EEG is sensitive to particular features of the text.
We propose that changes in theta and alpha bands during reading may serve as electrophysiological tools for assessing the reader’s cognitive state as well as his or her attitude to the text and the perceived information. These physiological markers have prospective practical value for developing technological solutions and biofeedback systems for reading in particular and for education in general.
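A band-limited spectral amplitude of the kind reported above can be approximated from raw EEG by integrating a Welch power spectral density over the band of interest. The sketch below uses synthetic data and a hypothetical sampling rate, not the authors' pipeline, to illustrate the idea for the 4-8 Hz theta and 8-14 Hz alpha bands:

```python
import numpy as np
from scipy.signal import welch

def band_amplitude(x, fs, f_lo, f_hi):
    """Integrate the Welch PSD over [f_lo, f_hi] and return the square root."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    band = (f >= f_lo) & (f <= f_hi)
    return np.sqrt(np.sum(pxx[band]) * (f[1] - f[0]))

rng = np.random.default_rng(0)
fs = 250                      # Hz, hypothetical sampling rate
t = np.arange(0, 10, 1 / fs)  # 10 s synthetic "Fz" trace, in microvolts
x = 4.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

theta = band_amplitude(x, fs, 4, 8)   # dominated by the injected 6 Hz rhythm
alpha = band_amplitude(x, fs, 8, 14)
print(theta > alpha)
```

With the injected 6 Hz rhythm, the theta amplitude dominates, mirroring how a band-specific measure tracks a band-specific signal.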

Keywords: EEG, natural reading, reader's cognitive state, theta-rhythm, alpha-rhythm

Procedia PDF Downloads 66
1535 Capital Accumulation and Unemployment in Namibia, Nigeria and South Africa

Authors: Abubakar Dikko

Abstract:

This research investigates the causes of unemployment in Namibia, Nigeria, and South Africa, and the role of capital accumulation in reducing the unemployment profiles of these economies, as proposed by post-Keynesian economics. It is conducted through an extensive review of the literature on NAIRU models, focusing on the post-Keynesian view of unemployment within the NAIRU framework. The NAIRU (non-accelerating inflation rate of unemployment) model has become a dominant framework in the macroeconomic analysis of unemployment. The study adopts the post-Keynesian argument that capital accumulation is a major determinant of unemployment. Unemployment remains the fundamental socio-economic challenge facing African economies and a burden on their citizens. Namibia, Nigeria, and South Africa are major African economies battling high unemployment rates: in 2013 they recorded rates of 16.9%, 23.9%, and 24.9%, respectively, and most of the unemployed are youth. Roughly 40% of working-age South Africans have jobs, and the share is lower still in Nigeria and Namibia. Unemployment in Africa has wide implications for households, leading to extensive poverty and inequality and fuelling crime. Recently, South Africa experienced xenophobic attacks driven by unemployment: the high unemployment rate led citizens to chase foreigners out of the country, claiming they had taken their jobs. The study proposes that there is a strong relationship between capital accumulation and unemployment in Namibia, Nigeria, and South Africa, and that inadequate capital accumulation is responsible for the high unemployment rates in these countries. For these economies to achieve a steady-state level of employment and satisfactory economic growth and development, capital accumulation needs to take place.
The countries in the study were selected after careful research and investigation, based on the following criteria: African economies with unemployment rates above 15% and about 40% of their workforce unemployed, the level the International Labour Organization (ILO) regards as critical for Africa, and African countries with a low level of capital accumulation. Appropriate statistical measures were employed using time-series analysis, and the results revealed that capital accumulation is the main driver of unemployment performance in the chosen African countries: an increase in the accumulation of capital causes unemployment to fall significantly. The results of this research will be useful to the federal governments and the ministries, departments, and agencies (MDAs) of Namibia, Nigeria, and South Africa in resolving the high and persistent unemployment rates in their economies, a great burden that slows the growth and development of developing economies. The results can also be useful to the World Bank, the African Development Bank, and the ILO in their further research on how to tackle unemployment in developing and emerging economies.

Keywords: capital accumulation, unemployment, NAIRU, Post-Keynesian economics

Procedia PDF Downloads 247
1534 Metal Layer Based Vertical Hall Device in a Complementary Metal Oxide Semiconductor Process

Authors: Se-Mi Lim, Won-Jae Jung, Jin-Sup Kim, Jun-Seok Park, Hyung-Il Chae

Abstract:

This paper presents a current-mode vertical Hall device (VHD) structure using the metal layers of a CMOS process. The proposed metal layer based vertical Hall device (MLVHD) utilizes vertical connections among metal layers (from M1 to the top metal) to exploit the Hall effect. The vertical metal structure unit carries a bias current Ibias from top to bottom, and an external magnetic field changes the current distribution through the Lorentz force. The asymmetric current distribution can be detected as two differential-mode current outputs, one on each side at the bottom (M1), each sinking Ibias/2 ± Ihall. A single vertical metal structure generates only a small Hall signal Ihall, owing to the short length from M1 to the top metal and the low conductivity of the metal; a series connection of N vertical structure units can solve this problem by providing N × Ihall. The series connection between two units is itself another vertical metal structure carrying current in the opposite direction, which generates a negative Hall contribution. To mitigate this, the differential current outputs at the bottom (M1) of one unit merge at the top metal level of the next unit. The proposed MLVHD was simulated in a 3-dimensional model in COMSOL Multiphysics with 0.35 μm CMOS process parameters. The simulated MLVHD unit size is (W) 10 μm × (L) 6 μm × (D) 10 μm; in this paper we use an MLVHD with 10 units, for an overall device size of (W) 10 μm × (L) 78 μm × (D) 10 μm. The COMSOL simulation gives a maximum Hall current of approximately 2 μA with a 12 μA bias current and a 100 mT magnetic field. This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. R7117-16-0165, Development of Hall Effect Semiconductor for Smart Car and Device).
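A back-of-envelope check using only the figures quoted in the abstract: with a 12 μA bias and a maximum Hall current of 2 μA from the 10-unit series device, each differential output sinks Ibias/2 ± Ihall, and each unit contributes Ihall/N. A sketch of the arithmetic:

```python
I_BIAS = 12e-6   # A, simulated bias current
I_HALL = 2e-6    # A, reported maximum Hall current at 100 mT
N_UNITS = 10
B_FIELD = 0.1    # T (100 mT)

i_hall_per_unit = I_HALL / N_UNITS   # contribution of one series unit
out_plus = I_BIAS / 2 + I_HALL       # one differential output
out_minus = I_BIAS / 2 - I_HALL      # the other differential output
sensitivity = I_HALL / B_FIELD       # A per tesla at this bias
print(i_hall_per_unit, out_plus, out_minus, sensitivity)
```

So each unit contributes about 0.2 μA of Hall signal, and the two outputs sink 8 μA and 4 μA at the reported operating point.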

Keywords: CMOS, vertical hall device, current mode, COMSOL

Procedia PDF Downloads 285
1533 Marginal Productivity of Small Scale Yam and Cassava Farmers in Kogi State, Nigeria: Data Envelopment Analysis as a Complement

Authors: M. A. Ojo, O. A. Ojo, A. I. Odine, A. Ogaji

Abstract:

The study examined the marginal productivity of small-scale yam and cassava farmers in Kogi State, Nigeria. Data were obtained from primary sources using a multi-stage sampling technique, with structured questionnaires administered to 150 randomly selected yam and cassava farmers from three Local Government Areas of the State. Descriptive statistics, data envelopment analysis (DEA), and a Cobb-Douglas production function were used to analyze the data. The DEA result on overall technical efficiency showed that 40% of the sampled farmers were operating at the frontier and optimum level of production, with a mean technical efficiency of 1.00. This implies that 60% of the yam and cassava farmers in the study area can still improve their efficiency through better utilization of available resources, given the current state of technology. The Cobb-Douglas analysis of factors affecting output showed that labour, planting materials, fertilizer, and capital inputs positively and significantly affected the output of the yam and cassava farmers in the study area. The study further revealed that yam and cassava farms in the study area operated under increasing returns to scale. The marginal productivity analysis showed that relatively efficient farms were more marginally productive in resource utilization. The study also shows that estimating production functions without separating the farms into efficient and inefficient ones biases the parameter values obtained. It is therefore recommended that yam and cassava farmers in the study area form cooperative societies so as to gain access to the productive inputs that will enable them to expand.
Also, since using a single-equation model for the production function produces biased parameter estimates, as confirmed above, farms should be decomposed into efficient and inefficient ones before the production function is estimated.
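The Cobb-Douglas step described above can be sketched as a log-linear OLS fit, where a sum of estimated input elasticities above 1 indicates the increasing returns to scale the study reports. The code below uses synthetic data with hypothetical elasticities, not the study's survey data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150  # matches the sample size reported in the abstract

# Hypothetical inputs: labour, planting material, fertilizer, capital
X = rng.uniform(1, 100, size=(n, 4))
true_beta = np.array([0.35, 0.30, 0.25, 0.20])  # elasticities summing to 1.10
ln_y = 0.5 + np.log(X) @ true_beta + 0.05 * rng.standard_normal(n)

# Log-linearised Cobb-Douglas: ln Y = ln A + sum_i beta_i ln X_i
design = np.column_stack([np.ones(n), np.log(X)])
coef, *_ = np.linalg.lstsq(design, ln_y, rcond=None)
returns_to_scale = coef[1:].sum()  # > 1 => increasing returns to scale
print(returns_to_scale > 1.0)
```

Fitting efficient and inefficient farms separately, as the study recommends, amounts to running this regression on each subsample rather than pooling them.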

Keywords: marginal productivity, DEA, production function, Kogi state

Procedia PDF Downloads 464
1532 The Effectiveness of Prenatal Breastfeeding Education on Breastfeeding Uptake Postpartum: A Systematic Review

Authors: Jennifer Kehinde, Claire O'Donnell, Annmarie Grealish

Abstract:

Introduction: Breastfeeding has been shown to provide numerous health benefits for both infants and mothers. The decision to breastfeed is influenced by physiological, psychological, and emotional factors, but the importance of equipping mothers with the knowledge necessary for successful breastfeeding cannot be overstated. The decline in the global breastfeeding rate can be linked to a lack of adequate breastfeeding education during the prenatal stage. This systematic review examined the effectiveness of prenatal breastfeeding education on breastfeeding uptake postpartum. Method: The review was undertaken and reported in conformity with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and was registered on the international prospective register of systematic reviews (PROSPERO: CRD42020213853). A PICO analysis (population, intervention, comparison, outcome) informed the choice of keywords in the search strategy and the formulation of the review question, which aimed to determine the effectiveness of prenatal breastfeeding educational programs at improving breastfeeding uptake following birth. Five databases (Cumulative Index to Nursing and Allied Health Literature, Medline, Psych INFO, and Applied Social Sciences Index and Abstracts) were searched from January 2014 to July 2021 to identify eligible studies. Quality assessment and narrative synthesis were subsequently undertaken. Results: Fourteen studies were included.
All 14 studies used different types of breastfeeding programs. Eight used a combination of curriculum-based breastfeeding education, group prenatal breastfeeding counselling, and one-to-one breastfeeding educational programs, all delivered in person. Four studies used web-based learning platforms to deliver breastfeeding education prenatally, delivered both online and face to face over a period of 3 weeks to 2 months, with follow-up periods ranging from 3 weeks to 6 months. One study delivered the breastfeeding educational intervention through mother-to-mother breastfeeding support groups promoting exclusive breastfeeding, and one study disseminated breastfeeding education to participants based on the theory of planned behaviour. The most effective interventions were those that included both theory and hands-on demonstrations. Results showed increases in breastfeeding uptake, breastfeeding knowledge, positive attitudes to breastfeeding, and maternal breastfeeding self-efficacy among mothers who participated in breastfeeding educational programs during prenatal care. Conclusion: Prenatal breastfeeding education increases women's knowledge of breastfeeding. Mothers who are knowledgeable about breastfeeding and hold a positive attitude towards it tend to initiate breastfeeding and continue for a lengthened period. The findings demonstrate a general correlation between prenatal breastfeeding education and increased breastfeeding uptake postpartum; the high level of positive breastfeeding outcomes across the studies can be attributed to prenatal breastfeeding education. This review provides rigorous contemporary evidence that healthcare professionals and policymakers can apply when developing effective strategies to improve breastfeeding rates and ultimately improve the health outcomes of mothers and infants.

Keywords: breastfeeding, breastfeeding programs, breastfeeding self-efficacy, prenatal breastfeeding education

Procedia PDF Downloads 41
1531 Finite Element Modeling of Aortic Intramural Haematoma Shows Size Matters

Authors: Aihong Zhao, Priya Sastry, Mark L Field, Mohamad Bashir, Arvind Singh, David Richens

Abstract:

Objectives: Intramural haematoma (IMH) is one of the pathologies, along with acute aortic dissection, that present as Acute Aortic Syndrome (AAS). Evidence suggests that, unlike aortic dissections, some intramural haematomas may regress with medical management; however, they have traditionally been managed like acute aortic dissections. Given that some of these pathologies may regress with conservative management, it would be useful to identify which may not need high-risk emergency intervention. A computational aortic model was used in this study to identify intramural haematomas at risk of progression to aortic dissection. Methods: We created a computational model of the aorta with luminal blood flow. Reports in the literature have identified 11 mm as the radial clot thickness associated with a heightened risk of progression of intramural haematoma. Accordingly, haematomas of varying sizes were implanted in the modelled aortic wall to test this hypothesis. The model was exposed to physiological blood flows, and the stresses and strains in each layer of the aortic wall were recorded. Results: The size and shape of the clot were seen to affect the magnitude of aortic stresses. The greatest stresses and strains were recorded in the intima of the model. When the haematoma exceeded 10 mm in all dimensions, the stress on the intima reached breaking point. Conclusion: Intramural clot size appears to be a contributory factor affecting aortic wall stress. Our computer simulation corroborates clinical evidence in the literature proposing that an IMH diameter greater than 11 mm may be predictive of progression. This preliminary report suggests finite element modelling of the aortic wall may be a useful process by which to examine putative variables important in predicting progression or regression of intramural haematoma.
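The first-order physics behind such wall-stress calculations can be illustrated with the thin-walled ("Laplace law") hoop stress relation, sigma = P * r / t. The pressure, radius, and thickness below are generic physiological assumptions for illustration only, not the finite element inputs used in the study:

```python
# Thin-walled (Laplace law) hoop stress: sigma = P * r / t.
# All values are illustrative physiological assumptions, not the study's data.
pressure_pa = 16_000    # ~120 mmHg luminal pressure, in pascals
radius_m = 0.015        # ~15 mm aortic lumen radius
thickness_m = 0.002     # ~2 mm load-bearing wall thickness

def hoop_stress(p: float, r: float, t: float) -> float:
    """First-order circumferential stress in a thin-walled pressurized cylinder."""
    return p * r / t

baseline = hoop_stress(pressure_pa, radius_m, thickness_m)
# A haematoma that halves the intact load-bearing wall doubles the
# stress carried by the remaining layers, including the intima:
weakened = hoop_stress(pressure_pa, radius_m, thickness_m / 2)
print(round(baseline), round(weakened))
```

This is only a scaling argument; the study's FE model resolves stresses per layer and per clot geometry, which the thin-wall formula cannot do.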

Keywords: intramural haematoma, acute aortic syndrome, finite element analysis

Procedia PDF Downloads 415
1530 Intrusion Detection in SCADA Systems

Authors: Leandros A. Maglaras, Jianmin Jiang

Abstract:

The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The EU-funded Framework 7 (FP7) research project CockpitCI introduces intelligent intrusion detection, analysis and protection techniques for Critical Infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected, and vulnerable, Information and Communication Technology (ICT), whilst the control equipment, legacy software/hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis and reaction tools that provide intelligence to field equipment, allowing it to make local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. The One-Class Support Vector Machine (OCSVM) is an intrusion detection mechanism that needs neither labeled data for training nor any information about the kind of anomaly to expect during detection. This feature makes it ideal for processing SCADA environment data and automating SCADA performance monitoring. The OCSVM module developed is trained offline on network traces and detects anomalies in the system in real time. The module is part of an IDS (intrusion detection system) developed under the CockpitCI project and communicates with the other parts of the system through the exchange of IDMEF messages, which carry information about the source of the incident, the time, and a classification of the alarm.
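The one-class idea, training on normal traffic only and flagging anything outside that region, can be sketched with scikit-learn's OneClassSVM. The traffic features, their distributions, and the nu/gamma settings below are illustrative assumptions, not the CockpitCI module's actual configuration:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Illustrative SCADA traffic features (e.g. packet rate, payload size),
# drawn from a synthetic "normal operation" distribution.
normal_traffic = rng.normal(loc=[100.0, 512.0], scale=[5.0, 20.0], size=(500, 2))

# Train offline on normal traces only -- no attack labels are needed.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
detector.fit(normal_traffic)

# At run time, score incoming traffic: +1 = normal, -1 = anomaly.
incoming = np.array([[101.0, 520.0],     # within the trained normal region
                     [400.0, 4000.0]])   # far outside it
print(detector.predict(incoming))
```

A deployed module would of course extract richer protocol-level features and emit IDMEF alerts instead of printing labels; the sketch only shows the unsupervised training/detection split described above.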

Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection

Procedia PDF Downloads 528
1529 Modelling of Heat Transfer during Controlled Cooling of Thermo-Mechanically Treated Rebars Using Computational Fluid Dynamics Approach

Authors: Rohit Agarwal, Mrityunjay K. Singh, Soma Ghosh, Ramesh Shankar, Biswajit Ghosh, Vinay V. Mahashabde

Abstract:

Thermo-mechanical treatment (TMT) of rebars is a critical process to impart sufficient strength and ductility to the rebar. TMT rebars are produced by the Tempcore process, which involves an 'in-line' heat treatment in which the hot rolled bar (at around 1080°C) is passed through water boxes, where it is quenched under high-pressure water jets (at around 25°C). The quenching rate dictates a composite structure of four non-homogeneously distributed phases in the rebar microstructure: pearlite-ferrite, bainite, and tempered martensite (from core to rim). The ferrite and pearlite phases present at the core give the rebar ductility, while the martensitic rim provides the appropriate strength. The TMT process is difficult to model as it brings together a multitude of complex physics, such as heat transfer, highly turbulent fluid flow, and multicomponent, multiphase flow in the control volume. Additionally, the presence of a film boiling regime (above the Leidenfrost point) due to steam formation adds complexity to the domain. A coupled heat transfer and fluid flow model based on computational fluid dynamics (CFD) has been developed at the product technology division of Tata Steel, India, which efficiently predicts the temperature profile and percentage martensite rim thickness of the rebar during the quenching process. The model has been validated against 16 mm rolling at the New Bar Mill (NBM) plant of Tata Steel Limited, India. Furthermore, based on scenario analyses, an optimal configuration of nozzles was found, which enabled a subsequent increase in rolling speed.

Keywords: boiling, critical heat flux, nozzles, thermo-mechanical treatment

Procedia PDF Downloads 192
1528 Chemical and Physical Properties and Biocompatibility of Ti–6Al–4V Produced by Electron Beam Rapid Manufacturing and Selective Laser Melting for Biomedical Applications

Authors: Bing–Jing Zhao, Chang-Kui Liu, Hong Wang, Min Hu

Abstract:

Electron beam rapid manufacturing (EBRM) and selective laser melting (SLM) are additive manufacturing processes that use 3D CAD data as a digital information source and energy in the form of a high-power electron beam or laser beam to create three-dimensional metal parts by fusing fine metallic powders together. Objective: The present study was conducted to evaluate the mechanical properties, phase transformation, corrosivity, and biocompatibility of Ti-6Al-4V produced by EBRM, SLM, and forging. Method: Ti-6Al-4V alloy standard test pieces were manufactured by EBRM, SLM, and forging according to AMS 4999, GB/T 228, and ISO 10993. The mechanical properties were analyzed with a universal testing machine. The phase transformation was analyzed by X-ray diffraction and scanning electron microscopy. The corrosivity was analyzed by an electrochemical method. The biocompatibility was analyzed by co-culturing with mesenchymal stem cells, using scanning electron microscopy (SEM) and an alkaline phosphatase (ALP) assay to evaluate cell adhesion and differentiation, respectively. Results: The mechanical properties, phase transformation, corrosivity, and biocompatibility of Ti-6Al-4V produced by EBRM and SLM were similar to those of the forged alloy and met the mechanical property requirements of the AMS 4999 standard. The EBRM product showed an α-phase microstructure, in contrast to the α′-phase microstructure of the SLM product. Mesenchymal stem cell adhesion and differentiation were good. Conclusion: The properties of the Ti-6Al-4V alloy manufactured by the EBRM and SLM techniques meet the medical standards examined in this study, but further work is needed before they can be applied well in clinical practice.

Keywords: 3D printing, Electron Beam Rapid Manufacturing (EBRM), Selective Laser Melting (SLM), Computer Aided Design (CAD)

Procedia PDF Downloads 444
1527 Utilization of the Biodiversity of Herbal Species Used as Food and Treatment along the Economic Tourism Routes of Phu Sing District in Sisaket Province, Thailand

Authors: Nopparet Thammasaranyakun

Abstract:

The objectives of this research are: 1) to study the biodiversity of medicinal plants used for food and treatment along the economic tourism routes of Phu Sing district, Sisaket province; 2) to study the use of these medicinal plants; 3) to provide a database of information on the biodiversity of food and medicinal plants of the district; and 4) to create learning about the biodiversity of medicinal plants used for food and treatment along these economic routes. The study area was Phu Sing district. The population included the Agricultural Development Center, Rayong Mun, under the initiative for local youth, government health officials, community leaders, teachers, students, schools, local people and tourists, sages with herbal wisdom, and the OTOP women's groups of Phu Sing district, Sisaket province, selected by purposive sampling. The process followed participatory action research (PAR), a community-based research approach, with qualitative data collection. The tools used were community context surveys, interviews, tape recordings, observation, and focus groups, and the data were analyzed using descriptive statistics. The findings were: 1) the study of biodiversity, covering both the dry and rainy seasons, found 251 species of medicinal plants used in 41 types of drugs; 2) the 251 species found have medicinal properties used for food and for 41 types of drugs; 3) a database of the 251 medicinal species and 41 drug types used for food and medicinal purposes in Sisaket province was compiled; and 4) learning about the biodiversity of medicinal plants used for food and treatment along the economic tourism routes of Phu Sing district was established.

Keywords: utilization of biodiversity, herbal species, used as food, Phu Sing district, Sisaket

Procedia PDF Downloads 339
1526 Cotton Fiber Quality Improvement by Introducing Sucrose Synthase (SuS) Gene into Gossypium hirsutum L.

Authors: Ahmad Ali Shahid, Mukhtar Ahmed

Abstract:

The demand for long-staple fiber with better strength and length is increasing with the introduction of a modern spinning and weaving industry in Pakistan. Work on gene discovery from developing cotton fibers has helped to identify dozens of genes that take part in cotton fiber development, and several genes have been characterized for their role in fiber development. Sucrose synthase (SuS) is a key enzyme in the metabolism of sucrose in a plant cell; in cotton fiber it catalyzes a reversible reaction, but preferentially converts sucrose and UDP into fructose and UDP-glucose. UDP-glucose (UDPG) is a nucleotide sugar that acts as a donor of glucose residues in many glycosylation reactions and is essential for the cytosolic formation of sucrose and for the synthesis of cell wall cellulose. The study focused on the successful stable Agrobacterium-mediated transformation of the SuS gene, in pCAMBIA 1301, into cotton under the CaMV35S promoter. Integration and expression of the gene were confirmed by PCR, GUS assay, and real-time PCR. Young leaves of SuS-overexpressing lines showed increased total soluble sugars and plant biomass compared with non-transgenic control plants. The cellulose content of the fiber was significantly increased. SEM analysis revealed that fibers from transgenic cotton were highly spiral and that the fiber twist number per unit length increased compared with the control. Morphological data from field plants showed that the transgenic plants performed better under field conditions. Incorporation of genes related to cotton fiber length and quality can provide new avenues for fiber improvement. The utilization of this technology would provide efficient import substitution and sustained production of long-staple fiber in Pakistan to fulfill industrial requirements.

Keywords: agrobacterium-mediated transformation, cotton fiber, sucrose synthase gene, staple length

Procedia PDF Downloads 216
1525 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City

Authors: Jothilakshmy Nagammal

Abstract:

This research paper deals with different case studies in which Form-Based Codes are adopted, and in particular discusses the different implementation methods, in order to develop a method for formulating a new planning model. The organizing principle of Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the City Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case study comparison in terms of the planning tools used, the code process adopted, and the various control regulations implemented is carried out for thirty-two different cities. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code/SmartCode, and required form-based standards or design guidelines. The case studies describe the positive and negative results of form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale for formulating a Form-Based Code varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit-Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based codes, and they vary from mandatory to integrated and floating. To attain sustainability, the research takes the approach of developing a regulating plan using the transect as the organizing principle for the entire area of the city in general, while formulating the Form-Based Codes for the selected special districts in the study area in specific, on a street basis.
Planning is most powerful when it is embedded in the broader context of systemic change and improvement. 'Systemic' is best thought of as holistic, contextualized and stakeholder-owned, while 'systematic' can be thought of as more linear, generalisable, and typically top-down or expert-driven. The systemic approach is a process based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, and interdependencies and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as the organizing principle and using Form-Based Codes to achieve sustainability of the city, has to be a hybrid code integrated within the existing system: a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in 'sensitive' areas of the community. With this approach and method, the new Context Specific Planning Model for achieving sustainability is created, as explained in detail in this research paper.

Keywords: context based planning model, form based code, transect, systemic approach

Procedia PDF Downloads 322
1524 Removal of VOCs from Gas Streams with Double Perovskite-Type Catalyst

Authors: Kuan Lun Pan, Moo Been Chang

Abstract:

Volatile organic compounds (VOCs) are major air contaminants; under solar irradiation they react with nitrogen oxides (NOx) in the atmosphere to form ozone (O3) and peroxyacetyl nitrate (PAN), leading to environmental hazards. In addition, some VOCs are toxic even at low concentrations and cause adverse effects on human health. How to effectively reduce VOC emissions has therefore become an important issue. Thermal catalysis is regarded as an effective route for VOC removal because it provides an oxidation pathway that converts VOCs into carbon dioxide (CO2) and water (H2O(g)). Single perovskite-type catalysts are promising for VOC removal and show good potential to replace noble metals owing to their good activity and high thermal stability. Single perovskites can generally be described as ABO3 or A2BO4, where the A-site is often a rare earth or alkaline earth element, and the B-site is typically a transition metal cation (Fe, Cu, Ni, Co, or Mn). The catalytic properties of perovskites mainly rely on the nature, oxidation states, and arrangement of the B-site cation. Interestingly, single perovskites can be further synthesized into double perovskite-type catalysts, represented simply as A2B'B''O6; likewise, the A-site stands for an alkaline earth metal or rare earth element, and B' and B'' are transition metals. Double perovskites possess unique surface properties: in the structure, B'O6 and B''O6 octahedra alternate in an ordered three-dimensional arrangement and corner-share along the three directions of the crystal lattice, while the A-site cations occupy the voids between the octahedra. This specific arrangement of alternating B-site cations has attracted considerable attention. Double perovskites may therefore offer more structural variation than single perovskites, and this greater variation may promote catalytic performance; the activity of double perovskites toward VOC removal is expected to be higher than that of single perovskites.
In this study, a double perovskite-type catalyst (La2CoMnO6) is prepared and evaluated for VOC removal; the single perovskites LaCoO3 and LaMnO3 are also tested for comparison. Toluene (C7H8) is one of the important VOCs commonly used in chemical processes, and in addition to its wide application, it is highly toxic even at low concentrations; C7H8 is therefore selected as the target compound. Experimental results indicate that the double perovskite (La2CoMnO6) has better activity than the single perovskites. In particular, C7H8 is completely oxidized to CO2 at 300°C when La2CoMnO6 is applied. Characterization of the catalysts indicates that the double perovskite has unique surface properties and contains higher amounts of lattice oxygen, leading to higher activity. In durability tests, La2CoMnO6 maintains a C7H8 removal efficiency of 100% at 300°C and 30,000 h⁻¹, and it also shows good resistance to CO2 (5%) and H2O(g) (5%) in the tested gas streams. For the various VOCs tested, including isopropyl alcohol (C3H8O), ethanal (C2H4O), and ethylene (C2H4), efficiencies as high as 100% could be achieved with the double perovskite-type catalyst operated at 300°C, indicating that double perovskites are promising catalysts for VOC removal; possible mechanisms will be elucidated in this paper.

Keywords: volatile organic compounds, toluene (C7H8), double perovskite-type catalyst, catalysis

Procedia PDF Downloads 149
1523 Experimental Investigation on Tensile Durability of Glass Fiber Reinforced Polymer (GFRP) Rebar Embedded in High Performance Concrete

Authors: Yuan Yue, Wen-Wei Wang

Abstract:

The objective of this research is to comprehensively evaluate the impact of alkaline environments on the durability of Glass Fiber Reinforced Polymer (GFRP) reinforcements in concrete structures and to further explore their potential value within the construction industry. Specifically, we investigate the effects of two widely used high-performance concrete (HPC) materials on the durability of GFRP bars embedded within them under varying temperature conditions. A total of 279 GFRP bar specimens were manufactured for microcosmic and mechanical performance tests: 270 specimens were used to test the residual tensile strength after 120 days of immersion, while 9 specimens were utilized for microscopic testing to analyze degradation damage. SEM techniques were employed to examine the microstructure of the GFRP and the cover concrete. Unidirectional tensile tests were conducted to determine the tensile strength remaining after corrosion. The experimental variables consisted of four types of concrete (engineered cementitious composite (ECC), ultra-high-performance concrete (UHPC), and two types of ordinary concrete with different compressive strengths) as well as three acceleration temperatures (20, 40, and 60°C). The experimental results demonstrate that high-performance concrete offers superior protection for GFRP bars compared to ordinary concrete. The two types of HPC enhance durability through different mechanisms: one by reducing the pH of the concrete pore fluid and the other by decreasing permeability. For instance, ECC improves the durability of embedded GFRP by lowering the pH of the pore fluid: after 120 days of immersion at 60°C under accelerated conditions, GFRP in ECC (pH = 11.5) retained 68.99% of its strength, while in PC1 (pH = 13.5) it retained 54.88%. UHPC, on the other hand, enhances the durability of GFRP reinforcement by decreasing the porosity and increasing the compactness of its protective layer.
Due to its fillers, UHPC typically exhibits lower porosity, higher density, and greater resistance to permeation than PC2, which has a similar pore fluid pH; this results in different durability for GFRP bars embedded in UHPC and PC2 after 120 days of immersion at 60°C, with residual strengths of 66.32% and 60.89%, respectively. Furthermore, SEM analysis revealed no noticeable evidence of fiber deterioration in any examined specimen, suggesting that uneven stress distribution resulting from interface segregation and matrix damage, rather than fiber corrosion, is the primary cause of the tensile strength reduction in GFRP. Moreover, long-term prediction models were used to calculate residual strength over time for reinforcement embedded in HPC under high-temperature, high-humidity conditions, indicating that approximately 75% of the initial strength is retained after 100 years of service.
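Long-term predictions of this kind commonly rest on an Arrhenius time-shift: degradation at an elevated test temperature is mapped onto an equivalent (much longer) time at service temperature. The activation energy below is an assumed, typical-order value for illustration, not the study's fitted parameter:

```python
import math

R = 8.314       # J/(mol*K), universal gas constant
Ea = 60_000     # J/mol, ASSUMED activation energy (typical order for GFRP resin)

def acceleration_factor(t_service_c: float, t_accel_c: float) -> float:
    """Arrhenius time-shift factor: how much faster degradation proceeds
    at the accelerated temperature than at the service temperature."""
    t_s = t_service_c + 273.15
    t_a = t_accel_c + 273.15
    return math.exp(Ea / R * (1.0 / t_s - 1.0 / t_a))

# 120 days of immersion at 60 C corresponds to this many days at 20 C:
af = acceleration_factor(20.0, 60.0)
print(round(af, 1), "x faster;", round(120 * af / 365, 1), "equivalent years")
```

With these assumed parameters, a four-month accelerated test stands in for several years of service exposure, which is why such models can extrapolate to a 100-year horizon.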

Keywords: GFRP bars, HPC, degradation, durability, residual tensile strength

Procedia PDF Downloads 39
1522 Implementation of Synthesis and Quality Control Procedures of ¹⁸F-Fluoromisonidazole Radiopharmaceutical

Authors: Natalia C. E. S. Nascimento, Mercia L. Oliveira, Fernando R. A. Lima, Leonardo T. C. do Nascimento, Marina B. Silveira, Brigida G. A. Schirmer, Andrea V. Ferreira, Carlos Malamut, Juliana B. da Silva

Abstract:

Tissue hypoxia is a common characteristic of solid tumors, leading to decreased sensitivity to radiotherapy and chemotherapy. In the clinical context, tumor hypoxia assessment employing the positron emission tomography (PET) tracer ¹⁸F-fluoromisonidazole ([¹⁸F]FMISO) helps physicians in planning and adjusting therapy. The aim of this work was to implement the synthesis of [¹⁸F]FMISO in a TRACERlab® MXFDG module and to establish the quality control procedure. [¹⁸F]FMISO was synthesized at Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN/Brazil) using an automated synthesizer (TRACERlab® MXFDG, GE) adapted for the production of [¹⁸F]FMISO. The FMISO chemical standard was purchased from ABX. ¹⁸O-enriched water was acquired from the Center of Molecular Research. Reagent kits containing eluent solution, acetonitrile, ethanol, 2.0 M HCl solution, buffer solution, water for injection, and [¹⁸F]FMISO precursor (dissolved in 2 ml acetonitrile) were purchased from ABX. The [¹⁸F]FMISO samples were purified by the solid-phase extraction method. The quality requirements of [¹⁸F]FMISO are established in the European Pharmacopoeia; according to that reference, quality control of [¹⁸F]FMISO should include appearance, pH, radionuclidic identity and purity, radiochemical identity and purity, chemical purity, residual solvents, bacterial endotoxins, and sterility. The duration of the synthesis process was 53 min, with a radiochemical yield of (37.00 ± 0.01)%, and the specific activity was more than 70 GBq/µmol. The syntheses were reproducible and showed satisfactory results. Regarding the quality control analysis, the samples were clear and colorless at pH 6.0. The emission spectrum, measured using a High-Purity Germanium (HPGe) detector, presented a single peak at 511 keV, and the half-life, determined by the decay method in an activimeter, was (111.0 ± 0.5) min, indicating no presence of radioactive contaminants besides the desired radionuclide (¹⁸F).
The samples showed a tetrabutylammonium (TBA) concentration < 50 μg/mL, assessed by visual comparison to a TBA standard applied on the same thin-layer chromatographic plate. Radiochemical purity was determined by high-performance liquid chromatography (HPLC), and the results were 100%. Regarding the residual solvents tested, ethanol and acetonitrile presented concentrations lower than 10% and 0.04%, respectively. Healthy female mice were injected via the lateral tail vein with [¹⁸F]FMISO; microPET imaging studies (15 min) were performed at 2 h post-injection (p.i.), and the biodistribution was analyzed at five time points (30, 60, 90, 120 and 180 min) after injection. Subsequently, organs/tissues were assayed for radioactivity with a gamma counter. All quality control parameters were in agreement with the quality criteria, confirming that [¹⁸F]FMISO is suitable for use in non-clinical and clinical trials, following the legal requirements for the production of new radiopharmaceuticals in Brazil.
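The decay-method half-life check reported above (111.0 ± 0.5 min against the accepted ~109.8 min for ¹⁸F) follows directly from the exponential decay law. The activities and timing below are illustrative, not the measured values:

```python
import math

F18_HALF_LIFE_MIN = 109.77  # accepted half-life of fluorine-18, in minutes

def activity(a0: float, t_min: float, half_life: float = F18_HALF_LIFE_MIN) -> float:
    """Activity remaining after t_min, from A(t) = A0 * 2**(-t / T_half)."""
    return a0 * 2.0 ** (-t_min / half_life)

# Decay method: measure the activity twice and infer the half-life.
a0, t = 1000.0, 60.0                 # MBq and minutes -- illustrative values
a1 = activity(a0, t)
measured_half_life = t * math.log(2) / math.log(a0 / a1)
print(round(a1, 1), "MBq remaining; inferred T1/2 =",
      round(measured_half_life, 2), "min")
```

A measured half-life matching ¹⁸F within uncertainty is what rules out longer- or shorter-lived radionuclidic contaminants.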

Keywords: automatic radiosynthesis, hypoxic tumors, pharmacopeia, positron emitters, quality requirements

Procedia PDF Downloads 179
1521 Estimating the Impact of Appliance Energy Efficiency Improvement on Residential Energy Demand in Tema City, Ghana

Authors: Marriette Sakah, Samuel Gyamfi, Morkporkpor Delight Sedzro, Christoph Kuhn

Abstract:

Ghana is experiencing rapid economic development, and its cities command an increasingly dominant role as centers of both production and consumption. Cities run on energy and are extremely vulnerable to energy scarcity, energy price escalations, and the health impacts of very poor air quality. The overriding concern in Ghana and other West African states is bridging the gap between energy demand and supply. Energy efficiency presents a cost-effective solution to supply challenges by enabling more coverage with current power supply levels and reducing the need for investment in additional generation capacity and grid infrastructure. In Ghana, major issues for energy policy formulation in the residential sector include a lack of disaggregated electrical energy consumption data and a lack of thorough understanding of the socio-economic influences on energy efficiency investment. This study uses a bottom-up approach to estimate baseline electricity end-use as well as the energy consumption of the best available technologies, enabling estimation of the energy-efficiency resource in terms of the relative reduction in total energy use for the city of Tema, Ghana. A ground survey was conducted to assess probable consumer behavior in response to energy efficiency initiatives, enabling estimation of the savings that would occur under specific policy interventions regarding funding and incentives targeted at households. Results show that a 16–54% reduction in annual electricity consumption is reasonably achievable, depending on the level of incentives provided. The saved energy could supply 10,000–34,000 additional households if the added households used only the best available technology. Political support and consumer awareness are necessary to translate energy efficiency resources into real energy savings.
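The bottom-up arithmetic behind such estimates can be sketched as follows: appliance stock times unit consumption gives the baseline, and replacing every unit with the best available technology (BAT) gives the efficiency resource. All figures below are illustrative assumptions, not the Tema survey data:

```python
# Bottom-up estimate. All stock counts and kWh figures are ILLUSTRATIVE.
appliances = {
    # name: (households owning, kWh/year baseline unit, kWh/year BAT unit)
    "refrigerator":    (50_000, 600, 250),
    "lighting":        (80_000, 200, 60),
    "air_conditioner": (15_000, 1500, 900),
}

baseline = sum(n * base for n, base, _ in appliances.values())
bat = sum(n * eff for n, _, eff in appliances.values())
savings = baseline - bat
print(f"baseline {baseline/1e6:.1f} GWh, savings {savings/1e6:.1f} GWh "
      f"({100 * savings / baseline:.0f}%)")

# Households the saved energy could supply if they used only BAT appliances:
bat_per_household = 250 + 60 + 900
print(savings // bat_per_household, "additional households")
```

The actual study weights this calculation by surveyed ownership rates and by the fraction of consumers likely to respond to each incentive level.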

Keywords: achievable energy savings, energy efficiency, Ghana, household appliances

Procedia PDF Downloads 199
1520 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering

Authors: Emiel Caron

Abstract:

Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about the scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references can be used to connect patent databases with bibliographic databases, e.g. to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references, i.e. they are often ambiguous, unstructured, and incomplete. Moreover, a complete bibliographic reference is stored in only one attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Owing to the scoring, different rules can be combined to join scientific references, i.e. the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After scoring, pairs of scientific references that are above a certain threshold are clustered by means of a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large gold-standard set of highly cited papers, shows on average 99% precision and 95% recall. The method is therefore accurate but cautious, i.e. it favours precision over recall. Consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g. in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as Web of Science or Scopus.
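The pipeline described above (rule-based pair scoring followed by single-linkage clustering into connected components) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the metadata fields, rule weights, and threshold are hypothetical assumptions, and `difflib.SequenceMatcher` stands in for whichever string similarity measures the method actually uses.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """String similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score_pair(r1, r2):
    """Combine rule scores so that rules reinforce each other.

    Weights are illustrative; the paper derives them from expert
    knowledge and initial method evaluation.
    """
    score = 0.0
    if r1["year"] and r1["year"] == r2["year"]:
        score += 0.2                               # matching publication year
    if similarity(r1["journal"], r2["journal"]) > 0.9:
        score += 0.3                               # near-identical journal name
    score += 0.5 * similarity(r1["title"], r2["title"])
    return score

def single_linkage(records, threshold=0.7):
    """Cluster records whose pairwise score exceeds the threshold.

    Union-find over above-threshold pairs yields the connected
    components. The brute-force O(n^2) pair loop is for illustration;
    Patstat-scale data would need a blocking step first.
    """
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]          # path compression
            i = parent[i]
        return i

    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if score_pair(records[i], records[j]) >= threshold:
                parent[find(i)] = find(j)          # merge components

    clusters = {}
    for i in range(len(records)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

Because the threshold gates every merge, records lacking evidence (e.g. a missing year and journal) simply stay in their own high-precision cluster, matching the precision-over-recall behaviour described above.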

Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics

Procedia PDF Downloads 177
1519 Development of Doctoral Education in Armenia (1990 - 2023)

Authors: Atom Mkhitaryan, Astghik Avetisyan

Abstract:

We analyze the development of doctoral education in Armenia since 1990 and its management process. Education and training of highly qualified personnel are increasingly seen as a fundamental platform that ensures the development of the state. Reforming the national institute for doctoral studies (aspirantura) is aimed at improving the quality of human resources in science, optimizing research topics in accordance with the priority areas of development of science and technology, increasing publication and innovation activity, bringing national science and research closer to the world level, and achieving international recognition. We present the number of dissertations defended in Armenia during the last 30 years, together with the dynamics and the main trends in the development of the academic degree awarding system. We discuss the possible impact of reforming the system of training and certification of highly qualified personnel on the organization of third-level doctoral education (doctoral schools) and specialized/dissertation councils in Armenia. The results of a SWOT analysis of doctoral education and academic degree awarding processes in Armenia are shown. The article presents the main activities and projects aimed at using the advantages and strengths of the National Academy network to improve the quality of doctoral education and training. The paper explores the mechanisms of organizational, methodological, and infrastructural support for the research and innovation activities of doctoral students and young scientists. Approaches to organizing strong networking between research institutes and foreign universities for the training and certification of highly qualified personnel are also suggested. The authors define the role of ISEC in the management of doctoral studies and the establishment of competitive third-level education for the research and development sphere in Armenia.

Keywords: doctoral studies, academic degree, PhD, certification, highly qualified personnel, dissertation, research and development, innovation, networking, management of doctoral school

Procedia PDF Downloads 49
1518 Collaborative Stylistic Group Project: A Drama Practical Analysis Application

Authors: Omnia F. Elkommos

Abstract:

In the course of teaching stylistics to undergraduate students of the Department of English Language and Literature, Faculty of Arts and Humanities, the linguistic toolkit of theories proves useful for a better understanding of the different literary genres: poetry, drama, and short stories. In the present paper, a model for teaching stylistics is compiled and suggested. It is a collaborative group project technique for use in undergraduate classes with diverse specialisms (Literature, Linguistics, and Translation tracks). Students are initially introduced to the different linguistic tools and theories suitable for each literary genre. The second step is to apply these linguistic tools to texts. Students are required to watch video performances of the poems or plays, for example, and search the net for interpretations of the texts by other authorities. They then use a template (prepared by the researcher) with guided questions that lead them through their analysis. Finally, a practical analysis is written up using the practical analysis essay template (also prepared by the researcher). In keeping with collaborative learning, all steps involve student-centered activities that address differentiation and accommodate the three specialisms. In selecting the proper tools, applying them, and discussing the analysis, students are given tasks that require their collaboration. They also work in small groups, and the groups collaborate in seminars and group discussions. At the end of the course/module, students present their work collaboratively and reflect and comment on their learning experience. The module/course uses a drama play that lends itself to the task: ‘The Bond’ by Amy Lowell and Robert Frost. The project results in an interpretation of its theme, characterization, and plot. The linguistic tools are drawn from pragmatics and discourse analysis, among others.

Keywords: applied linguistic theories, collaborative learning, cooperative principle, discourse analysis, drama analysis, group project, online acting performance, pragmatics, speech act theory, stylistics, technology enhanced learning

Procedia PDF Downloads 155
1517 Sustainable Membranes Based on 2D Materials for H₂ Separation and Purification

Authors: Juan A. G. Carrio, Prasad Talluri, Sergio G. Echeverrigaray, Antonio H. Castro Neto

Abstract:

Hydrogen, as a fuel and an environmentally friendly energy carrier, is part of the transition towards low-carbon energy systems. The extensive deployment of hydrogen production, purification, and transport infrastructures still presents significant challenges. Independent of the production process, hydrogen is generally mixed with light hydrocarbons and other undesirable gases that need to be removed to obtain H₂ with the purity required for end applications. In this context, membranes are one of the simplest, most attractive, sustainable, and high-performing technologies enabling hydrogen separation and purification. They demonstrate high separation efficiencies and low energy consumption in operation, a significant leap compared to current energy-intensive alternatives. The unique characteristics of 2D laminates have given rise to a diversity of research on their potential applications in separation systems. Specifically, it is already known in the scientific literature that graphene oxide-based membranes present the highest reported selectivity of H₂ over other gases. This work explores the potential of a new type of 2D materials-based membrane in separating H₂ from CO₂ and CH₄. We have developed nanostructured composites based on 2D materials and applied them in the fabrication of membranes, maximising H₂ selectivity and permeability for different gas mixtures by adjusting the membranes' characteristics. Our proprietary technology does not depend on specific porous substrates, which allows its integration in diverse separation modules with different geometries and configurations, addressing both the technical performance required for industrial applications and economic viability. Tuning and precise control of the processing parameters allowed us to keep membrane thicknesses below 100 nanometres to provide high permeabilities. Our results for the selectivity of the new nanostructured 2D materials-based membranes are in the range of the performance reported in the literature for 2D materials (such as graphene oxide) applied to hydrogen purification, which validates their use as one of the most promising next-generation hydrogen separation and purification solutions.

Keywords: membranes, 2D materials, hydrogen purification, nanocomposites

Procedia PDF Downloads 104
1516 Adaptability of Steel-Framed Industrialized Building System In Post-Service Life

Authors: Alireza Taghdiri, Sara Ghanbarzade Ghomi

Abstract:

Existing buildings are permanently subject to change, being continuously renovated and repaired over their long service lives. Old buildings are demolished, and their materials and components are recycled or reused for constructing new ones. In this process, the importance of sustainability principles in building construction is well known, and great significance must be attached to the consumption of resources, the resulting effects on the environment, and economic costs. Strategies for extending buildings' service lives and delaying demolition have a positive effect on environmental protection. In addition, simpler alterability or expandability of building structures and reduced consumption of energy and natural resources benefit users, producers, and the environment. To address these problems, the structural components of some conventional building systems were analyzed by applying theories of open building, and a new geometry-adaptive building system was then developed that can transform and support different imposed loads. To achieve this goal, various research methods and tools were applied, including review of the professional and scientific literature, comparative analysis, case study, and computer simulation, and data interpretation was carried out using descriptive statistics and logical arguments. The hypothesis and proposed strategies were thus evaluated, and an adaptable, reusable two-dimensional building system was presented that responds appropriately to dwellers' and end-users' needs and allows the structural components of the building system to be reused in new constructions or functions. Investigations showed that this incremental building system can be successfully applied to achieve the architectural design objectives, and that with small modifications to components and joints, it is easy to obtain different, adaptable, load-optimized component alternatives for flexible spaces.

Keywords: adaptability, durability, open building, service life, structural building system

Procedia PDF Downloads 418
1515 The Effect of Penalizing Wrong Answers in the Computerized Modified Multiple Choice Testing System

Authors: Min Hae Song, Jooyong Park

Abstract:

Even though assessment using information and communication technology will most likely lead the future of educational assessment, there is little research on this topic. Computerized assessment will not only cut costs but also measure students' performance in ways not possible before. In this context, this study introduces a tool that can overcome the problems of multiple-choice tests. Multiple-choice (MC) tests are efficient for automatic grading; however, their structure allows students to find the correct answer among the options even when they do not know it. A computerized modified multiple-choice testing system (CMMT) was developed using the interactivity of computers: it presents the question first, and the options later, for a short time, when the student requests them. This study was conducted to find out whether penalizing wrong answers in CMMT could reduce random guessing. We checked whether students knew the answers by having them respond to short-answer tests before choosing among the given options in the CMMT or MC format. Ninety-four students were tested with the instruction that they would be penalized for wrong answers, but not for no response. There were four experimental conditions: a high or low penalty, each in the traditional multiple-choice or CMMT format. In the low-penalty condition, the penalty rate was the probability of getting the correct answer by random guessing. In the high-penalty condition, students were penalized at twice the rate of the low-penalty condition. The results showed that the number of no-responses was significantly higher, and the number of random guesses significantly lower, for the CMMT format. There were no significant differences between the two penalty conditions, which may be because the actual score difference between the two conditions was too small. In the discussion, the possibility of applying CMMT-format tests while penalizing wrong answers in actual testing settings is addressed.
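The scoring scheme described above, in which a wrong answer costs the probability of a correct random guess (and twice that in the high-penalty condition) while no response costs nothing, can be sketched as follows. The function name, signature, and defaults are illustrative assumptions, not the authors' actual scoring code.

```python
def corrected_score(n_correct, n_wrong, n_options, penalty_mult=1.0):
    """Penalty-corrected test score.

    Wrong answers are penalized; omitted items are not. In the
    low-penalty condition the per-item penalty equals the chance of
    guessing correctly at random (1 / n_options); the high-penalty
    condition doubles it (penalty_mult=2.0).
    """
    penalty = penalty_mult / n_options
    return n_correct - penalty * n_wrong
```

For example, with four options per item, a student with 10 correct answers, 4 wrong answers, and any number of omissions would score 9.0 under the low penalty and 8.0 under the high penalty, so omitting an unknown item always beats a purely random guess in expectation only when the penalty is high enough.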

Keywords: computerized modified multiple choice test format, multiple-choice test format, penalizing, test format

Procedia PDF Downloads 157
1514 The Confluence between Autism Spectrum Disorder and the Schizoid Personality

Authors: Murray David Schane

Abstract:

Through years of clinical encounters with patients with autism spectrum disorders and patients with a schizoid personality, the many defining diagnostic features shared between these conditions have been explored, current neurobiological differences have been reviewed, and critical, distinct treatment strategies for each have been devised. The paper compares and contrasts the apparent similarities between autism spectrum disorders and the schizoid personality found in these DSM descriptive categories: restricted range of social-emotional reciprocity; poor non-verbal communicative behavior in social interactions; difficulty developing and maintaining relationships; detachment from social relationships; lack of desire for or enjoyment of close relationships; and preference for solitary activities. In this paper, autism, fundamentally a communicative disorder, is revealed to present clinically as a pervasive aversive response to efforts to engage with or be engaged by others. Autists with the Asperger presentation typically have language but have difficulty understanding humor, irony, sarcasm, metaphoric speech, and even narratives about social relationships. They also tend to seek sameness, possibly to avoid problems of social interpretation. Repetitive behaviors engage many autists as a screen against ambient noise, social activity, and challenging interactions. Also in this paper, the schizoid personality is revealed as a pattern of social avoidance, self-sufficiency, and apparent indifference to others that serves as a complex psychological defense against a deep, long-abiding fear of appropriation and perverse manipulation. Neither genetic nor MRI studies have yet located explanatory data that identify the cause or the neurobiology of autism. Similarly, studies of the schizoid have yet to group that condition with those found in schizophrenia.
Through presentations of clinical examples, the treatment of autists of the Asperger type is shown to address the autist's extreme social aversion, which also precludes the experience of empathy. Autists are revealed as forming social attachments, but without the capacity to interact with mutual concern. Empathy is shown to be teachable; as social avoidance relents, autists can come to recognize and acknowledge the meaning and signs of empathic needs. The treatment of schizoids is shown to revolve around joining empathically with the schizoid's apprehensions about interpersonal, interactive proximity. Models of both autism and schizoid personality traits have yet to be replicated in animals, eliminating the role of translational research in providing the kind of clues to behavioral patterns that can be related to genetic, epigenetic, and neurobiological measures. But as these clinical examples attest, treatment strategies have significant impact.

Keywords: autism spectrum, schizoid personality traits, neurobiological implications, critical diagnostic distinctions

Procedia PDF Downloads 101