Search results for: non-linear exponential (NLINEX) loss function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9457

1237 Thomas Kuhn, the Accidental Theologian: An Argument for the Similarity of Science and Religion

Authors: Dominic McGann

Abstract:

Applying Kuhn’s model of paradigm shifts in science to cases of doctrinal change in religion has been a common area of study in recent years. Few authors, however, have sought an explanation for the ease with which this model of theory change in science can be applied to cases of religious change. In order to provide such an explanation of this analytic phenomenon, this paper aims to answer one central question: Why is it that a theory that was intended to be used in an analysis of the history of science can be applied to something as disparate as the doctrinal history of religion with little to no modification? By way of answering this question, this paper begins with an explanation of Kuhn’s model and its applications in the field of religious studies. Following this, Massa’s recently proposed explanation for this phenomenon and its notable flaws will be examined, framing the central proposal of this article: that the operative parts of scientific and religious changes function on the same fundamental concept of changes in understanding. Focusing its argument on this key concept, this paper seeks to illustrate its operation in cases of religious conversion and in Kuhn’s notion of the incommensurability of different scientific paradigms. The conjecture of this paper is that just as a Pagan-turned-Christian ceases to hear Thor’s hammer when they hear a clap of thunder, so too does a Ptolemaic-turned-Copernican astronomer cease to see the Sun orbiting the Earth when they view a sunrise. In both cases, the agent in question has undergone a similar change in universal understanding, which provides us with a fundamental connection between changes in religion and changes in science. Following an exploration of this connection, this paper will consider the implications that such a connection has for the concept of the division between religion and science.
This will, in turn, lead to the conclusion that religion and science are more alike than they are opposed with regards to the fundamental notion of understanding, thereby providing an answer to our central question. The major finding of this paper is that Kuhn’s model can be applied to religious cases so easily because changes in science and changes in religion operate on the same type of change in understanding. Therefore, in summary, science and religion share a crucial similarity and are not as disparate as they first appear.

Keywords: Thomas Kuhn, science and religion, paradigm shifts, incommensurability, insight and understanding, philosophy of science, philosophy of religion

Procedia PDF Downloads 169
1236 Catalytic Pyrolysis of Sewage Sludge for Upgrading Bio-Oil Quality Using Sludge-Based Activated Char as an Alternative to HZSM5

Authors: Ali Zaker, Zhi Chen

Abstract:

Due to concerns about the depletion of fossil fuel sources and the deteriorating environment, investigating the production of renewable energy will play a crucial role in alleviating dependency on mineral fuels. One particular area of interest is the generation of bio-oil through sewage sludge (SS) pyrolysis. SS can be a potential candidate in contrast to other types of biomass due to its availability and low cost. However, the presence of high-molecular-weight hydrocarbons and oxygenated compounds in SS bio-oil hinders some of its fuel applications. In this context, catalytic pyrolysis is an attainable route to upgrade bio-oil quality. Among the different catalysts (e.g., zeolites) studied for SS pyrolysis, activated chars (AC) are eco-friendly alternatives. The beneficial features of AC derived from SS comprise a comparatively large surface area, porosity, enriched surface functional groups, and a high amount of metal species that can improve catalytic activity. Hence, in this study a sludge-based AC catalyst was fabricated in a single-step pyrolysis reaction with NaOH as the activation agent and was compared with HZSM5 zeolite. The thermal decomposition and kinetics were investigated via thermogravimetric analysis (TGA) to guide and control the pyrolysis and catalytic pyrolysis and to inform the design of the pyrolysis setup. The results indicated that pyrolysis and catalytic pyrolysis comprise four distinct stages, with the main decomposition reaction occurring in the range of 200-600°C. The Coats-Redfern method was applied in the 2nd and 3rd devolatilization stages to estimate the reaction order and activation energy (E) from the mass loss data. The average activation energy (Em) values for the reaction orders n = 1, 2, and 3 were in the range of 6.67-20.37 kJ for SS, 1.51-6.87 kJ for HZSM5, and 2.29-9.17 kJ for AC, respectively.
According to the results, both AC and HZSM5 were able to improve the reaction rate of SS pyrolysis by reducing the Em value. Moreover, to generate bio-oil and examine the effect of the catalysts on its quality, a fixed-bed pyrolysis system was designed and implemented. The composition of the produced bio-oil was analyzed via gas chromatography/mass spectrometry (GC/MS). The selected SS-to-catalyst ratios were 1:1, 2:1, and 4:1. The optimum ratio in terms of cracking the long-chain hydrocarbons and removing oxygen-containing compounds was 1:1 for both catalysts. The upgraded bio-oils with AC and HZSM5 were in the total range of C4-C17, with around 72% in the range of C4-C9. The bio-oil from pyrolysis of SS alone contained 49.27% oxygenated compounds, which dropped to 13.02% and 7.3% in the presence of AC and HZSM5, respectively. Meanwhile, the generation of benzene, toluene, and xylene (BTX) compounds was significantly improved in the catalytic process. Furthermore, the fabricated AC catalyst was characterized by BET, SEM-EDX, FT-IR, and TGA techniques. Overall, this research demonstrated that AC is an efficient catalyst for the pyrolysis of SS and can be used as a cost-competitive alternative to HZSM5.
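The Coats-Redfern step mentioned above can be sketched numerically. This is a minimal illustration on synthetic first-order (n = 1) mass-loss data, not the paper's dataset; the heating rate and pre-exponential factor are absorbed into the fitted intercept, and the value 20 kJ/mol is chosen only for demonstration.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def coats_redfern_E(T, alpha, n=1):
    """Estimate activation energy E (J/mol) from TGA conversion data via the
    Coats-Redfern linearization.  For n = 1,
        ln[-ln(1 - alpha) / T^2] = ln(A*R/(beta*E)) - E/(R*T),
    so fitting y = ln[g(alpha)/T^2] against 1/T gives slope -E/R."""
    T = np.asarray(T, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    if n == 1:
        g = -np.log(1.0 - alpha)
    else:
        g = (1.0 - (1.0 - alpha) ** (1.0 - n)) / (1.0 - n)
    slope, _ = np.polyfit(1.0 / T, np.log(g / T**2), 1)
    return -slope * R

# Synthetic conversion data for a 200-600 degC devolatilization stage,
# generated to be exactly consistent with the n = 1 model at E = 20 kJ/mol
T = np.linspace(473.0, 873.0, 50)        # temperature, K
E_true = 20_000.0                        # J/mol (illustrative value)
y = -E_true / (R * T) - 9.0              # intercept -9.0 chosen arbitrarily
alpha = 1.0 - np.exp(-np.exp(y) * T**2)  # invert the linearization
print(round(coats_redfern_E(T, alpha) / 1000, 2))  # recovers ~20.0 kJ/mol
```

On real TGA data the fit is applied separately per devolatilization stage, as the abstract describes for stages 2 and 3.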

Keywords: catalytic pyrolysis, sewage sludge, activated char, HZSM5, bio-oil

Procedia PDF Downloads 178
1235 Understanding the Dynamics of Human-Snake Negative Interactions: A Study of Indigenous Perceptions in Tamil Nadu, Southern India

Authors: Ramesh Chinnasamy, Srishti Semalty, Vishnu S. Nair, Thirumurugan Vedagiri, Mahesh Ganeshan, Gautam Talukdar, Karthy Sivapushanam, Abhijit Das

Abstract:

Snakes form an integral component of ecological systems. The human population explosion and the associated acceleration of habitat destruction and degradation have led to a rapid increase in human-snake encounters. The study aims at understanding the level of awareness, knowledge, and attitude of the people towards human-snake negative interactions and the role of awareness programmes in the Moyar river valley, Tamil Nadu. The study area is part of the Mudumalai and the Sathyamangalam Tiger Reserves, which are significant wildlife corridors between the Western Ghats and the Eastern Ghats in the Nilgiri Biosphere Reserve. The data were collected using a questionnaire covering 644 respondents spread across 18 villages between 2018 and 2019. The study revealed that 86.5% of respondents had strong negative perceptions towards snakes, propelled by fear, superstition, and the threat of snakebite; this was common across different villages (F = 4.48; p < 0.05) and age groups (X2 = 1.946; p = 0.962). Cobra (27.8%, n = 294) and rat snake (21.3%, n = 225) were the most sighted species, and most snake encounters occurred during the monsoon season, i.e., July 35.6% (n = 218), June 19.1% (n = 117), and August 18.4% (n = 113). At least 1 out of 5 respondents was reportedly bitten by snakes during their lifetime. The most common species causing snakebite were saw-scaled viper (32.6%, n = 42) followed by cobra (17.1%, n = 22). About 21.3% (n = 137) of people reported livestock loss due to pythons and other snakes. Most people preferred medical treatment for snakebite (87.3%), whereas 12.7% still believed in traditional methods. The majority (82.3%) used precautionary measures such as keeping traditional items like garlic, kerosene, and snake plant to avoid snakes. About 30% of the respondents expressed the need for technical and monetary support from the forest department that could aid in reducing human-snake conflict.
It is concluded that the general perception in the study area is driven by fear and negative attitudes towards snakes. Though snakes such as the cobra are widely worshipped in the region, there are still widespread myths and misconceptions that have led to the irrational killing of snakes. Awareness and innovative education programmes rooted in the local context and language should be integrated at the village level to minimize the risk and associated threat of snakebite among the people. Results from this study should help policy makers devise appropriate conservation measures to reduce human-snake conflicts in India.

Keywords: envenomation, health education, human-wildlife conflict, neglected tropical disease, snakebite mitigation, traditional practitioners

Procedia PDF Downloads 224
1234 Household Food Security and Poverty Reduction in Cameroon

Authors: Bougema Theodore Ntenkeh, Chi-bikom Barbara Kyien

Abstract:

The reduction of poverty and hunger sits at the heart of the United Nations 2030 Agenda for Sustainable Development and constitutes the first two of the Sustainable Development Goals. World Food Day, celebrated on the 16th of October every year, highlights the need for people to have physical and economic access at all times to enough nutritious and safe food to live a healthy and active life, while World Poverty Day, celebrated on the 17th of October, is an opportunity to acknowledge the struggle of people living in poverty, a chance for them to make their concerns heard, and for the community to recognize and support poor people in their fight against poverty. Research on the association between household food security and poverty reduction in Cameroon is not only sparse but mostly qualitative. This paper therefore investigates the effect of household food security on poverty reduction in Cameroon quantitatively, using data from the Cameroon Household Consumption Survey collected by the Government Statistics Office. The methodology combines five indicators of household food security into a score using Multiple Correspondence Analysis, while poverty is captured as a dummy variable. Using a control function technique, with pre- and post-estimation tests for robustness, the study finds that household food security has a positive and significant effect on poverty reduction in Cameroon. A unit increase in the food security score reduces the probability of the household being poor by 31.8%, and this effect is statistically significant at 1%. The results further illustrate that the age of the household head and household size increase household poverty, while households residing in urban areas are significantly less poor. The paper therefore recommends that households diversify their food intake to enhance an effective supply of labour in the job market as a strategy to reduce household poverty.
Furthermore, family planning methods should be encouraged as a strategy to reduce the birth rate for an equitable distribution of household resources, including food. The government of Cameroon should also develop the rural areas, given that trends in urbanization are associated with the concentration of productive economic activities, leading to increased household income, increased household food security, and poverty reduction.
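The two-stage control function idea described above can be sketched on synthetic data. Everything here is illustrative: the instrument `z`, the coefficients, and the use of a logit fitted by Newton's method as a stand-in for the paper's binary-outcome model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: z is an instrument, u is unobserved heterogeneity
z = rng.normal(size=n)
u = rng.normal(size=n)
food_security = 0.8 * z + u + rng.normal(size=n)  # endogenous regressor
# Latent poverty propensity falls with food security but rises with u
poor = (0.5 - 0.6 * food_security + u + rng.normal(size=n) > 0).astype(float)

# Stage 1: regress the endogenous score on the instrument, keep residuals
X1 = np.column_stack([np.ones(n), z])
b1, *_ = np.linalg.lstsq(X1, food_security, rcond=None)
resid = food_security - X1 @ b1

# Stage 2: binary-response model including the first-stage residual,
# which absorbs the unobserved component correlated with food security
X2 = np.column_stack([np.ones(n), food_security, resid])
beta = np.zeros(X2.shape[1])
for _ in range(25):                       # Newton-Raphson for the logit MLE
    p = 1.0 / (1.0 + np.exp(-X2 @ beta))
    grad = X2.T @ (poor - p)
    H = X2.T @ (X2 * (p * (1.0 - p))[:, None])
    beta += np.linalg.solve(H, grad)

print(beta[1] < 0)  # food security lowers the probability of being poor
```

The sign (not the magnitude) of the food-security coefficient is the point of the sketch; the paper's 31.8% figure is a marginal effect from its own survey data.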

Keywords: food security, poverty reduction, SDGs, Cameroon

Procedia PDF Downloads 76
1233 Effects of Kinesio Taping on Pain and Functions of Chronic Nonspecific Low Back Pain Patients

Authors: Ahmed Assem Abd El Rahim

Abstract:

BACKGROUND: Low back pain (LBP) is an enormously common health problem, and most people experience it at some point in their lives. Kinesio taping is one of the therapy methods introduced for cases with nonspecific low back pain. OBJECTIVES: To examine how Kinesio taping affects cases with non-specific low back pain in terms of pain, range of motion, and back muscle strength. SUBJECTS: 40 mechanical LBP patients aged 20-40 years were randomly assigned into two groups; they were selected from the outpatient clinic of Kasr Al-Aini Hospital, Cairo University. METHODS: Group A: 20 patients received the I-shape KT applied longitudinally plus a conventional physiotherapy program. Group B: 20 cases received the KT applied horizontally plus a conventional physiotherapy program. Pain was measured by visual analog scale, disability by the Roland Morris Disability Questionnaire (RMDQ), and strength by an isokinetic dynamometer before and after therapy. Therapy sessions were held three times weekly for four weeks. RESULTS: Both groups (A and B) showed a decrease in pain and disability and a rise in flexion and extension ROM and peak torque of the trunk extensors after 4 weeks of the program. Mean values of the pain scale after therapy were 3.7 and 5.04 in groups A and B, respectively. Mean values of the disability scale after treatment were 7.87 and 9.35; mean values of flexion ROM were 28.06 and 24.53; mean values of extension ROM were 13.43 and 10.73; and mean values of peak torque of the lumbar extensors were 65.43 and 63.22 in groups A and B, respectively.
However, participants who received the I-shape KT longitudinally with the conventional physiotherapy program (group A) showed a greater reduction in pain and disability and greater improvement in flexion ROM, extension ROM, and peak torque of the lumbar extensors (P < 0.001) after the therapy program. CONCLUSION: Therapeutic longitudinal Kinesio-taping application with conventional physiotherapy is more valuable than horizontal application with conventional physiotherapy when treating cases of nonspecific low back pain.

Keywords: Kinesio taping, function, low back pain, muscle power

Procedia PDF Downloads 61
1232 Reading as Moral Afternoon Tea: An Empirical Study on the Compensation Effect between Literary Novel Reading and Readers’ Moral Motivation

Authors: Chong Jiang, Liang Zhao, Hua Jian, Xiaoguang Wang

Abstract:

The belief that there is a strong relationship between reading narrative and morality has generally become a basic assumption of scholars, philosophers, and cultural critics. The virtuality constructed by literary novels inspires readers to regard the narrative as a thought experiment, creating distance between readers and events so that they can freely and morally experience the positions of different roles. Therefore, virtual narrative combined with literary characteristics is often considered a "moral laboratory." Well-established findings reveal that people show less lying and deceptive behavior in the morning than in the afternoon, called the morning morality effect. As a limited self-regulation resource, morality is constantly depleted with the change of time rhythm under the influence of the morning morality effect. It can also be compensated and restored in various ways, such as eating, sleeping, etc. As a common form of entertainment in modern society, literary novel reading gives people virtual experience and emotional catharsis, just like a relaxing afternoon tea that helps people break away from fast-paced work, restore physical strength, and relieve stress in a short period of leisure. In this paper, inspired by compensation control theory, we ask whether reading literary novels in the digital environment could replenish a kind of spiritual energy for self-regulation to compensate for people's moral loss in the afternoon. Based on this assumption, we leverage the social annotation text generated by readers in digital reading to represent readers' reading attention. We then recognized the semantics, calculated the moral motivation expressed in the annotations, and investigated the fine-grained dynamics of moral motivation changing in each time slot within the 24 hours of a day.
Comprehensively comparing divisions into different time intervals, extensive experiments showed that the moral motivation reflected in the annotations in the afternoon is significantly higher than that in the morning. The results robustly verified the hypothesis that reading compensates for moral motivation, which we call the moral afternoon tea effect. Moreover, we quantitatively identified that such moral compensation can last until 14:00 in the afternoon and 21:00 in the evening. In addition, it is interesting to find that the division of time intervals of different units impacts the identification of moral rhythms: dividing the day into four-hour time slots brings more insight into moral rhythms than three-hour or six-hour time slots.
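The slot-based comparison described above can be sketched as follows. The annotation scores, the slot boundaries, and the afternoon-vs-morning contrast are all fabricated for illustration; the paper's scores come from semantic analysis of real social annotations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annotation stream: (hour-of-day, moral-motivation score).
# Scores are fabricated so afternoon annotations trend higher, mimicking
# the "moral afternoon tea" pattern reported in the abstract.
hours = rng.integers(0, 24, size=2000)
scores = rng.normal(loc=np.where((hours >= 12) & (hours < 21), 0.6, 0.4),
                    scale=0.1)

def slot_means(hours, scores, slot_width):
    """Mean motivation per time slot of the given width (in hours)."""
    slots = hours // slot_width
    return {int(s): float(scores[slots == s].mean())
            for s in np.unique(slots)}

means = slot_means(hours, scores, 4)  # six 4-hour slots: 0-3, 4-7, ..., 20-23
morning = means[2]                    # 08:00-11:59
afternoon = means[3]                  # 12:00-15:59
print(afternoon > morning)
```

Re-running `slot_means` with widths 3 and 6 reproduces the abstract's comparison of slot granularities.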

Keywords: digital reading, social annotation, moral motivation, morning morality effect, control compensation

Procedia PDF Downloads 149
1231 Spatial Analysis in the Impact of Aquifer Capacity Reduction on Land Subsidence Rate in Semarang City between 2014-2017

Authors: Yudo Prasetyo, Hana Sugiastu Firdaus, Diyanah Diyanah

Abstract:

The lack of clean water supply in several big cities in Indonesia is a major problem in the development of urban areas, and in the city of Semarang the population density and the growth of physical development are very high. Continuous, large-scale extraction of underground water (the aquifer) can drastically deplete the aquifer supply year by year, especially given the intensity of aquifer use for household needs and industrial activities. This is worsened by the land subsidence phenomenon in some areas of Semarang city. Therefore, dedicated research is needed to establish the spatial correlation between decreasing aquifer capacity and the land subsidence phenomenon. This is necessary to confirm that the occurrence of land subsidence can be caused by a loss of pressure balance below the land surface. One method to observe the correlation pattern between the two phenomena is the application of remote sensing technology based on radar and optical satellites. Implementation of the Differential Interferometric Synthetic Aperture Radar (DINSAR) or Small Baseline Subset (SBAS) method on SENTINEL-1A satellite imagery acquired over the 2014-2017 period yields the land subsidence pattern. These results are spatially correlated with the aquifer-decline pattern over the same time period. Survey results from 8 monitoring wells deeper than 100 m are used to observe the multi-temporal pattern of change in aquifer capacity. In addition, the aquifer capacity pattern is validated against 2 underground water cavity maps from the Ministry of Energy and Mineral Resources (ESDM) for Semarang city. Spatial correlation studies are conducted on the land subsidence and aquifer capacity patterns using overlay and statistical methods.
The results of this correlation show how strongly the decrease in underground water capacity influences the distribution and intensity of land subsidence in Semarang city. In addition, the results are analyzed with respect to geological aspects, including hydrogeological parameters, soil types, aquifer types, and geological structures. The output of this study is a correlation map between aquifer capacity and land subsidence in the city of Semarang for the period 2014-2017, which should help the authorities in the spatial planning of Semarang city in the future.

Keywords: aquifer, differential interferometric synthetic aperture radar (DINSAR), land subsidence, small baseline subset (SBAS)

Procedia PDF Downloads 181
1230 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially people rescue and evacuation from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, losses or damage may accrue whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of saved people within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMR), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR’s search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, in which a target object is not discovered (‘overlooked’) by the AMR’s sensors even though the AMR is in its close neighborhood, and (ii) a 'false-positive' detection error, also known as ‘a false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies.
A specificity of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting any next location. We provide a fast approximation algorithm for finding the AMR route that adopts a greedy search strategy: at each step, the on-board computer computes a current search effectiveness value for each location in the zone and then searches the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
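A minimal version of such a greedy strategy can be sketched as follows, assuming an effectiveness ratio of posterior detection probability to cost and Bayesian updating after each unsuccessful look. For brevity the sketch models only the false-negative (overlook) error, not false alarms, and all numbers are illustrative.

```python
import numpy as np

def greedy_search_route(prior, p_detect, cost, n_steps):
    """Greedy AMR routing sketch.  At each step, search the location with
    the highest effectiveness = posterior target probability * detection
    probability / search cost.  After an unsuccessful look, the posterior
    is updated by Bayes' rule: the target may still be at the searched
    location because of the false-negative (overlook) error."""
    post = np.asarray(prior, dtype=float).copy()
    route = []
    for _ in range(n_steps):
        effectiveness = post * p_detect / cost
        j = int(np.argmax(effectiveness))
        route.append(j)
        # P(no detection at j) = 1 - post[j] * p_detect[j]
        denom = 1.0 - post[j] * p_detect[j]
        post[j] *= 1.0 - p_detect[j]   # overlooked-target mass stays at j
        post /= denom                  # renormalize over all locations
    return route, post

prior = np.array([0.5, 0.2, 0.2, 0.1])     # illustrative location prior
p_detect = np.array([0.8, 0.8, 0.8, 0.8])  # 1 - overlook probability
cost = np.ones(4)                          # equal search cost per cell
route, post = greedy_search_route(prior, p_detect, cost, n_steps=5)
print(route)  # greedy visiting order; cells may be revisited
```

The dependence on the whole search history that the abstract emphasizes enters through `post`, which encodes every previous unsuccessful look.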

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 169
1229 Structural Health Monitoring of Buildings–Recorded Data and Wave Method

Authors: Tzong-Ying Hao, Mohammad T. Rahmani

Abstract:

This article presents a structural health monitoring (SHM) method based on changes in wave travel times (the wave method) within a layered 1-D shear beam model of a structure. The wave method measures the velocity of the shear wave propagating in a building from the impulse response functions (IRF) obtained from data recorded at different locations inside the building. If structural damage occurs, the velocity of wave propagation through the structure changes. The wave method analysis is performed on the responses of the Torre Central building, a 9-story shear wall structure located in Santiago, Chile. Because events of different intensity (ambient vibrations, weak and strong earthquake motions) have been recorded at this building, it can serve as a full-scale benchmark to validate the structural health monitoring method utilized. The analysis of inter-story drifts and of the Fourier spectra for the EW and NS motions during the 2010 Chile earthquake is presented. The results for the NS motions suggest coupling of the translational and torsional responses. The system frequencies (estimated from the relative displacement response of the 8th floor with respect to the basement) initially decreased by approximately 24% in the EW motion; near the end of shaking, an increase of about 17% was detected. These analyses and results serve as baseline indicators of the occurrence of structural damage. The detected changes in wave velocities of the shear beam model are consistent with the observed damage. However, the 1-D shear beam model is not sufficient to simulate the coupling of translational and torsional responses in the NS motion. The wave method is shown to be suitable for actual implementation in structural health monitoring systems, provided the resolution and accuracy of the model are carefully assessed for effectiveness in post-earthquake damage detection in buildings.
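The core of the wave method, picking the shear-wave travel time off the impulse response functions and converting it to a velocity, can be sketched as follows. The sampling rate, building height, and pulse shapes are all assumed for illustration; real IRFs are deconvolved from recorded motions.

```python
import numpy as np

fs = 200.0      # sampling rate, Hz (assumed)
height = 27.0   # assumed 9 stories x 3 m
v_true = 270.0  # m/s, so the true travel time is 0.1 s

# Synthetic impulse responses: the roof pulse is the basement pulse
# delayed by the shear-wave travel time through the building
t = np.arange(0.0, 2.0, 1.0 / fs)
pulse = np.exp(-((t - 0.5) ** 2) / (2 * 0.01 ** 2))
delay = int(round(height / v_true * fs))
irf_base = pulse
irf_roof = np.roll(pulse, delay)

# Travel time from the cross-correlation peak between the two IRFs
lags = np.arange(-len(t) + 1, len(t))
xc = np.correlate(irf_roof, irf_base, mode="full")
tau = lags[np.argmax(xc)] / fs   # travel time, s
v_est = height / tau             # shear-wave velocity, m/s
print(round(v_est))
```

Damage detection then amounts to comparing `v_est` before and after an event: a drop in velocity (equivalently, a longer travel time) indicates stiffness loss.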

Keywords: Chile earthquake, damage detection, earthquake response, impulse response function, shear beam model, shear wave velocity, structural health monitoring, torre central building, wave method

Procedia PDF Downloads 365
1228 A Methodology for Developing New Technology Ideas to Avoid Patent Infringement: F-Term Based Patent Analysis

Authors: Kisik Song, Sungjoo Lee

Abstract:

With the recently growing importance of intangible assets, the impact of patent infringement on the business of a company has become more evident. Accordingly, it is essential for firms to estimate the risk of patent infringement before developing a technology and to create new technology ideas that avoid that risk. Recognizing these needs, several attempts have been made to help develop new technology opportunities, and most of them have focused on identifying emerging vacant technologies from patent analysis. In these studies, the IPC (International Patent Classification) system or keywords from text mining applied to patent documents were generally used to define vacant technologies. Unlike those studies, this study adopted F-term, which classifies patent documents according to the technical features of the inventions described in them. Since F-term analyzes technical features from various perspectives, it provides more detailed information about technologies than IPC and more systematic information than keywords. Therefore, if well utilized, it can be a useful guideline for creating new technology ideas. Recognizing this potential, this paper aims to suggest a novel approach to developing new technology ideas to avoid patent infringement based on F-term. For this purpose, we first collected data about F-term and then applied text mining to the descriptions of classification criteria and attributes. From the text mining results, we could identify other technologies with technical features similar to those of the existing patented technology. Finally, we compare the technologies and extract the technical features that are commonly used in other technologies but have not been used in the existing one.
These features are presented in terms of “purpose”, “function”, “structure”, “material”, “method”, “processing and operation procedure”, and “control means”, and so are useful for creating new technology ideas that help avoid infringing the patent rights of other companies. Theoretically, this is one of the earliest attempts to apply F-term to patent analysis; the proposed methodology shows how to best take advantage of F-term and its wealth of technical information. In practice, the proposed methodology can be valuable in the ideation process for successful product and service innovation without infringing the patents of other companies.
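The comparison step, finding technologies with similar feature profiles and extracting features absent from the existing patent, can be sketched with set-based feature vectors. The technology names and feature labels below are entirely hypothetical stand-ins for real F-term codes.

```python
import numpy as np

# Hypothetical F-term-like feature profiles: each technology is described
# by a set of technical-feature terms (the real F-term facets include
# "purpose", "function", "structure", "material", and so on)
techs = {
    "patented":  {"purpose:sealing", "material:rubber", "structure:ring"},
    "similar_a": {"purpose:sealing", "material:silicone", "structure:ring",
                  "method:molding"},
    "similar_b": {"purpose:damping", "material:rubber", "control:feedback"},
}

vocab = sorted(set().union(*techs.values()))

def vec(features):
    """Binary occurrence vector over the shared feature vocabulary."""
    return np.array([1.0 if f in features else 0.0 for f in vocab])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

target = techs["patented"]
sims = {name: cosine(vec(target), vec(f))
        for name, f in techs.items() if name != "patented"}
best = max(sims, key=sims.get)             # most similar technology
candidates = sorted(techs[best] - target)  # features unused in the patent
print(best, candidates)
```

The `candidates` set corresponds to the abstract's "technical features commonly used in other technologies but not in the existing one", i.e., the raw material for non-infringing ideas.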

Keywords: patent infringement, new technology ideas, patent analysis, F-term

Procedia PDF Downloads 265
1227 Effect of Packaging Material and Water-Based Solutions on Performance of Radio Frequency Identification for Food Packaging Applications

Authors: Amelia Frickey, Timothy (TJ) Sheridan, Angelica Rossi, Bahar Aliakbarian

Abstract:

The growth of large food supply chains has demanded improved end-to-end traceability of food products, which has led companies to be increasingly interested in using smart technologies such as Radio Frequency Identification (RFID)-enabled packaging to track items. As the technology becomes widely used, several technological and economic issues must be overcome to facilitate the adoption of this track-and-trace technology. One of the technological challenges of RFID is its sensitivity to different environmental form factors, including packaging materials and the contents of the packaging. Although researchers have assessed the performance loss due to the proximity of water and aqueous solutions, the impact of food products on the reading range of RFID tags needs further investigation; to the best of our knowledge, there are not enough studies to determine the correlation between RFID tag performance and food and beverage properties. The goal of this project was to investigate the effect of solution properties (pH and conductivity) and of different packaging materials filled with food-like water-based solutions on the performance of an RFID tag. Three commercially available ultra-high-frequency RFID tags were placed on three different bottles, which were filled with different concentrations of water-based solutions including sodium chloride, citric acid, sucrose, and ethanol. Transparent glass, polyethylene terephthalate (PET), and Tetrapak® were used as packaging materials commonly used in the beverage industry. Tag readability (Theoretical Read Range, TRR) and sensitivity (Power on Tag Forward, PoF) were determined using an anechoic chamber. First, the best place to attach the tag on each packaging material was investigated using empty and water-filled bottles. Then the bottles were filled with the food-like solutions and tested with the three different tags, and the PoF and TRR were measured at the fixed frequency of 915 MHz.
In parallel, the pH and conductivity of the solutions were measured. The best-performing tag was then selected to test bottles filled with wine, orange juice, and apple juice. Although the various solutions altered the performance of each tag, the change in tag performance had no correlation with the pH or conductivity of the solution. Additionally, packaging material played a significant role in tag performance, and each tag tested performed optimally under different conditions. This study is the first part of comprehensive research to determine a regression model for the prediction of tag performance based on the packaging material and its contents. More investigations, including more tags and food products, are needed to develop a robust regression model. The results of this study can be used by RFID tag manufacturers to design suitable tags for specific products with similar properties.
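For context, the theoretical read range of a passive UHF tag is conventionally derived from the Friis free-space transmission equation, as the range at which the power delivered to the tag chip just reaches its activation threshold. A sketch follows; every numeric value (reader power, antenna gains, chip sensitivity) is assumed for illustration rather than taken from the study.

```python
import math

def theoretical_read_range(ptx_w, g_tx, g_tag, p_tag_min_w, freq_hz):
    """Free-space theoretical read range (m) of a passive UHF tag from the
    Friis transmission equation:
        r = (lambda / 4*pi) * sqrt(P_tx * G_tx * G_tag / P_tag_min)."""
    lam = 3e8 / freq_hz  # wavelength, m
    return (lam / (4 * math.pi)) * math.sqrt(
        ptx_w * g_tx * g_tag / p_tag_min_w)

# Assumed example numbers: 1 W (30 dBm) reader, 6 dBi reader antenna,
# 2 dBi tag antenna, -18 dBm chip sensitivity, 915 MHz carrier
r = theoretical_read_range(
    ptx_w=1.0,
    g_tx=10 ** (6 / 10),
    g_tag=10 ** (2 / 10),
    p_tag_min_w=10 ** (-18 / 10) / 1000,  # dBm -> W
    freq_hz=915e6)
print(round(r, 1))  # read range in meters
```

Proximity to water-based contents effectively degrades the tag antenna gain and detunes the chip threshold, which is why the measured TRR varies with the bottle contents even at a fixed 915 MHz.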

Keywords: smart food packaging, supply chain management, food waste, radio frequency identification

Procedia PDF Downloads 112
1226 Caregivers Burden: Risk and Related Psychological Factors in Caregivers of Patients with Parkinson’s Disease

Authors: Pellecchia M. T., Savarese G., Carpinelli L., Calabrese M.

Abstract:

Introduction: Parkinson's disease (PD) is characterized by a progressive loss of autonomy, which undoubtedly has a significant impact on the quality of life of caregivers, and relatives are the main informal caregivers. Caring for a person with PD is associated with an increased risk of psychiatric morbidity and persistent anxious-depressive distress. The aim of the study is to investigate the burden on caregivers of patients with PD through multidimensional scales and to identify its personal and environmental determinants. Methods: The study was approved by the Ethics Committee of the University of Salerno, and informed consent for participation was obtained from patients and their caregivers. The study was conducted at the Neurology Department of the A.O.U. "San Giovanni di Dio e Ruggi d’Aragona" of Salerno between September 2020 and May 2021. Materials: The questionnaires used were: a) Caregiver Burden Inventory (CBI), a 24-item questionnaire that identifies five sub-categories of burden (objective, psychological, physical, social, emotional); b) Depression Anxiety Stress Scales Short Version (DASS-21), a 21-item questionnaire valid for examining three distinct but interrelated areas (depression, anxiety, and stress); c) Family Strain Questionnaire Short Form (FSQ-SF), a 30-item questionnaire grouped into areas of increasing psychological risk (OK, R, SR, U); d) Zarit Caregiver Burden Inventory (ZBI), consisting of 22 items based on the analysis of two main factors: personal stress and role-related pressure; e) Life Satisfaction, a single item that evaluates the degree of global life satisfaction on a 0-100 Likert scale. Findings: 29 caregivers (mean age = 55.14, SD = 9.86; 69% female) participated in the study. 20.6% of the sample had severe burden (CBI score: M = 26.31, SD = 22.43), and 13.8% of participants had moderate to severe burden (ZBI).
The FSQ-SF highlighted a minority of caregivers who need psychological support, in some cases urgently (Area SR and Area U). The DASS-21 results show a prevalence of stress-related symptoms (M = 10.90, SD = 10.712) compared to anxiety (M = 7.52, SD = 10.752) and depression (M = 8, SD = 10.876). There are significant correlations between some specific variables and mean test scores: retired caregivers report higher ZBI scores (ρ = 0.423) and lower Life Satisfaction levels (ρ = -0.460) than working caregivers; years of schooling show a negative linear correlation with the ZBI score (ρ = -0.491). A t-test indicates that caregivers of patients with cognitive impairment are at greater risk than those of patients without cognitive impairment. Conclusions: Knowing the factors that most affect burden would allow early recognition of risky situations and of caregivers who need adequate support.
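The signed coefficients reported above behave like correlation coefficients (a p-value cannot be negative). As a minimal illustration of how such a coefficient and its sign are computed, a pure-Python sketch with invented data, not the study's:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented pairs: years of schooling vs. a burden score (ZBI-like).
schooling = [8, 10, 12, 14, 16, 18]
burden = [40, 35, 30, 28, 20, 18]
r = pearson_r(schooling, burden)  # negative: more schooling, lower burden
```

A negative coefficient, as for schooling vs. ZBI above, simply means the two quantities move in opposite directions.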

Keywords: anxious-depressive axis, caregivers’ burden, Parkinson’s disease, psychological risks

Procedia PDF Downloads 213
1225 Evaluation of a Higher Diploma in Mental Health Nursing Using Qualitative and Quantitative Methods: Effects on Student Behavior, Attitude and Perception

Authors: T. Frawley, G. O'Kelly

Abstract:

The UCD School of Nursing, Midwifery and Health Systems Higher Diploma in Mental Health (HDMH) nursing programme commenced in January 2017. Forty students successfully completed the programme. Programme evaluation was conducted from the outset. Research ethics approval was granted by the UCD Human Research Ethics Committee – Sciences in November 2016 (LS-E-16-163). Plan for Sustainability: Each iteration of the programme continues to be evaluated and adjusted accordingly. Aims: The ultimate purpose of the HDMH programme is to prepare registered nurses (registered children’s nurse (RCN), registered nurse in intellectual disability (RNID) and registered general nurse (RGN)) to function as effective registered psychiatric nurses in all settings which provide care and treatment for people experiencing mental health difficulties. Curriculum evaluation is essential to ensure that the programme achieves its purpose, that aims and expected outcomes are met, and that required changes are highlighted for the programme’s continuing positive development. Methods: Both quantitative and qualitative methods were used in the evaluation. A series of questionnaires was used (the majority pre- and post-programme) to determine student perceptions of the programme and behavioural and attitudinal change from commencement to completion. These included the Student Assessment of Learning Gains (SALG); Mental Health Knowledge Schedule (MAKS); Mental Illness: Clinicians’ Attitudes scale (MICA); Reported and Intended Behaviour Scale (RIBS); and Community Attitudes towards the Mentally Ill (CAMI). In addition, student and staff focus groups were conducted. Evaluation methods also incorporated module feedback. Outcome/Results: The evaluation highlighted a very positive response in relation to the achievement of programme outcomes and preparation for future work as registered psychiatric nurses.
Some areas were highlighted for further development, which have been taken cognisance of in the 2019 iteration of the programme.

Keywords: learning gains, mental health, nursing, stigma

Procedia PDF Downloads 137
1224 Design of Photonic Crystal with Defect Layer to Eliminate Interface Corrugations for Obtaining Unidirectional and Bidirectional Beam Splitting under Normal Incidence

Authors: Evrim Colak, Andriy E. Serebryannikov, Pavel V. Usik, Ekmel Ozbay

Abstract:

Working with a dielectric photonic crystal (PC) structure that does not include surface corrugations, unidirectional transmission and dual-beam splitting are observed under normal incidence as a result of the strong diffractions caused by the embedded defect layer. The defect layer has twice the period of the regular PC segments which sandwich it. Although the PC has an even number of rows, the structural symmetry is broken due to the asymmetric placement of the defect layer with respect to the symmetry axis of the regular PC. The simulations verify that efficient splitting and the occurrence of strong diffractions are related to the dispersion properties of the Floquet-Bloch modes of the photonic crystal. Unidirectional and bi-directional splitting, which are associated with asymmetric transmission, arise due to the dominant contribution of the first positive and first negative diffraction orders. The effect of the depth of the defect layer is examined by placing a single defect layer in varying rows, preserving the asymmetry of the PC. Even for a deeply buried defect layer, asymmetric transmission remains valid even if the zeroth order is not coupled; this transmission is due to evanescent waves which reach the deeply embedded defect layer and couple to higher-order modes. In an additional selected configuration, whichever surface is illuminated, i.e., in both upper- and lower-surface illumination cases, the incident beam is split into two beams of equal intensity at the output surface. That is, although the structure is asymmetric, symmetric bidirectional transmission with equal transmission values is demonstrated, and the structure mimics the behavior of symmetric structures.
Finally, simulation studies including the examination of a coupled-cavity defect for two different permittivity values (close to the permittivity values of GaAs or Si and alumina) reveal unidirectional splitting for a wider band of operation in comparison to the bandwidth obtained in the case of a single embedded defect layer. Since the dielectric materials that are utilized are low-loss and weakly dispersive in a wide frequency range including microwave and optical frequencies, the studied structures should be scalable to the mentioned ranges.
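The first positive and first negative diffraction orders invoked above follow from the grating equation at normal incidence, sin θₘ = mλ/Λ, where Λ is the defect-layer period. A small sketch with hypothetical wavelength and period values (not taken from the paper):

```python
import math

def diffraction_angles(wavelength, period, max_order=3):
    """Propagating orders at normal incidence: sin(theta_m) = m * wavelength / period."""
    angles = {}
    for m in range(-max_order, max_order + 1):
        s = m * wavelength / period
        if abs(s) <= 1.0:  # |sin| > 1 would be an evanescent order
            angles[m] = math.degrees(math.asin(s))
    return angles

# Hypothetical numbers: defect layer with twice the regular period.
angles = diffraction_angles(wavelength=1.5, period=2.0)
# Only m = -1, 0, +1 propagate; the +/-1 orders leave at symmetric angles.
```

When the zeroth order is suppressed, the symmetric m = ±1 pair is exactly the dual-beam splitting described in the abstract.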

Keywords: asymmetric transmission, beam deflection, blazing, bi-directional splitting, defect layer, dual beam splitting, Floquet-Bloch modes, isofrequency contours, line defect, oblique incidence, photonic crystal, unidirectionality

Procedia PDF Downloads 182
1223 A Genre-Based Approach to the Teaching of Pronunciation

Authors: Marden Silva, Danielle Guerra

Abstract:

Some studies have indicated that pronunciation teaching has not received enough attention from teachers in EFL contexts. In particular, addressing segmental and suprasegmental features through a genre-based approach may be an opportunity to integrate pronunciation into a more meaningful learning practice. Therefore, the aim of this project was to carry out a survey on some aspects of English pronunciation that Brazilian students consider more difficult to learn, thus enabling the discussion of strategies that can facilitate the development of oral skills in English classes by integrating the teaching of phonetic-phonological aspects into the genre-based approach. Notions of intelligibility, fluency and accuracy were proposed by some authors as an ideal didactic sequence. According to their proposals, basic learners should be exposed to activities focused on the notion of intelligibility, intermediate students to the notion of fluency, and finally more advanced ones to accuracy practices. In order to test this hypothesis, data collection was conducted during three high school English classes at the Federal Center for Technological Education of Minas Gerais (CEFET-MG), in Brazil, through questionnaires and didactic activities, which were recorded and transcribed for further analysis. The debate genre was chosen to facilitate the participants' oral expression in a freer way, having them answer questions and give their opinions about a previously selected topic. The findings indicated that basic students demonstrated more difficulty with aspects of English pronunciation than the others. Many of the intelligibility aspects analyzed had to be listened to more than once for a better understanding.
For intermediate students, the recorded speeches were considerably easier to understand, but they found it more difficult to pronounce words fluently, often interrupting their speech to think about what they were going to say and how they would say it. Lastly, more advanced learners seemed to express their ideas more fluently, but subtle errors related to accuracy were still perceptible in their speech, thereby confirming the proposed hypothesis. It was also seen that using a genre-based approach to promote oral communication in English classes might be a relevant method, considering the socio-communicative function inherent in the suggested approach.

Keywords: EFL, genre-based approach, oral skills, pronunciation

Procedia PDF Downloads 129
1222 The Noun-Phrase Elements on the Usage of the Zero Article

Authors: Wen Zhen

Abstract:

Compared to content words, function words, especially articles, have been relatively overlooked by English learners. The article system, to a certain extent, becomes an obstacle to mastering English, driven by different factors. Three principal factors can be summarized in terms of the nature of articles when referring to the difficulty of the English article system. The article system is made more complex, however, by difficulties in the second-language acquisition process, since [-ART] learners have to create a new category, causing even most non-native speakers at a proficient level to make errors. Studies of the acquisition sequence of English articles show that the zero article is acquired first and with high inaccuracy. The zero article is often overused in the early stages of L2 acquisition. Although learners at the intermediate level move to underusing the zero article once they realize that it does not cover every case, overproduction of the zero article occurs even among advanced L2 learners. The aim of the study is to investigate the noun-phrase factors which give rise to incorrect usage or overuse of the zero article, thus providing suggestions for L2 English acquisition. Moreover, it enables teachers to carry out effective instruction that activates conscious learning in students. The research question will be answered through a corpus-based, data-driven approach analyzing noun-phrase elements in terms of semantic context and the countability of noun phrases. Based on the analysis of the International Thurber Thesis corpus, the results show that: (1) Although the [-definite, -specific] context favored the zero article, both [-definite, +specific] and [+definite, -specific] contexts showed less influence; when we reflect on the frequency order of the zero article, prototypicality plays a vital role. (2) EFL learners in this study have trouble classifying abstract nouns as countable.
Overuse of the zero article arises when learners cannot make clear judgements on countability as the context shifts from [+definite] to [-definite]: once a noun is perceived as uncountable, the choice falls back on the zero article. These findings suggest that learners should be engaged in recognizing the countability of new vocabulary, by explaining nouns within lexical phrases, and should explore more complex aspects such as discourse-dependent analysis.

Keywords: noun phrase, zero article, corpus, second language acquisition

Procedia PDF Downloads 251
1221 Innovative Fabric Integrated Thermal Storage Systems and Applications

Authors: Ahmed Elsayed, Andrew Shea, Nicolas Kelly, John Allison

Abstract:

In northern European climates, domestic space heating and hot water represent a significant proportion of total primary energy use, and meeting these demands from a national electricity grid supplied by renewable energy sources provides an opportunity for a significant reduction in EU CO2 emissions. However, in order to adapt to the intermittent nature of renewable energy generation and to avoid coincident peak electricity usage from consumers that may exceed current capacity, the demand for heat must be decoupled from its generation. Storage of heat within the fabric of dwellings for use some hours, or days, later provides a route to complete decoupling of demand from supply and facilitates the greatly increased use of renewable energy generation in a local or national electricity network. The integration of thermal energy storage into the building fabric for retrieval at a later time requires evaluation of many competing thermal, physical, and practical considerations such as the profile and magnitude of heat demand, the duration of storage, charging and discharging rates, storage media, space allocation, etc. In this paper, the authors report investigations of thermal storage in building fabric using concrete, present an evaluation of several factors that impact performance, including heating pipe layout, heating fluid flow velocity, storage geometry, and thermo-physical material properties, and also present an investigation of alternative storage materials and alternative heat transfer fluids. Reducing the heating pipe spacing from 200 mm to 100 mm enhances the stored energy by 25%, and high-performance vacuum insulation results in a heat loss flux of less than 3 W/m2, compared to 22 W/m2 for the more conventional EPS insulation. Dense concrete achieved the greatest storage capacity, relative to medium and light-weight alternatives, although a material thickness of 100 mm required more than 5 hours to charge fully.
Layers of 25 mm and 50 mm thickness can be charged in 2 hours, or less, facilitating a fast response that could, aggregated across multiple dwellings, provide significant and valuable reduction in demand from grid-generated electricity in expected periods of high demand and potentially eliminate the need for additional new generating capacity from conventional sources such as gas, coal, or nuclear.
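The reported charging times are consistent with a simple one-dimensional diffusion estimate, t ≈ L²/α, for a concrete layer of thickness L. A back-of-envelope sketch; the thermal diffusivity value for dense concrete is our assumption, not a figure from the paper:

```python
ALPHA_CONCRETE = 7e-7  # m^2/s, assumed thermal diffusivity of dense concrete

def charge_time_hours(thickness_m, alpha=ALPHA_CONCRETE):
    """Order-of-magnitude time for heat to diffuse through a slab: t ~ L^2 / alpha."""
    return thickness_m ** 2 / alpha / 3600.0

# Thicker layers charge quadratically more slowly:
# 25 mm -> ~0.25 h, 50 mm -> ~1 h, 100 mm -> ~4 h,
# in line with the fast response of the 25-50 mm layers reported above.
```

The quadratic scaling explains why halving the layer thickness cuts the charging time by roughly a factor of four.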

Keywords: fabric integrated thermal storage, FITS, demand side management, energy storage, load shifting, renewable energy integration

Procedia PDF Downloads 165
1220 AAV-Mediated Human α-Synuclein Expression in a Rat Model of Parkinson's Disease – Further Characterization of PD Phenotype, Fine Motor Functional Effects as Well as Neurochemical and Neuropathological Changes over Time

Authors: R. Pussinen, V. Jankovic, U. Herzberg, M. Cerrada-Gimenez, T. Huhtala, A. Nurmi, T. Ahtoniemi

Abstract:

Targeted over-expression of human α-synuclein using viral-vector-mediated gene delivery into the substantia nigra of rats and non-human primates has been reported to lead to dopaminergic cell loss and the formation of α-synuclein aggregates reminiscent of Lewy bodies. We have previously shown how AAV-mediated expression of α-synuclein produces a chronic phenotype in rats over a 16-week follow-up period. In the context of these findings, we attempted to further characterize the long-term PD-related functional and motor deficits as well as the neurochemical and neuropathological changes in the AAV-mediated α-synuclein transfection model in rats during a chronic follow-up period. Different titers of recombinant AAV expressing human α-synuclein (A53T) were stereotaxically injected unilaterally into the substantia nigra of Wistar rats. Rats were allowed to recover for 3 weeks prior to initial baseline behavioral testing with the rotational asymmetry test, stepping test and cylinder test. A similar behavioral test battery was applied again at weeks 5, 9, 12 and 15. In addition to traditionally used rat PD model tests, the MotoRater test system, a high-speed kinematic gait monitoring platform, was applied during the follow-up period. Evaluation focused on differences in animal gait between groups. Tremor analysis was performed on weeks 9, 12 and 15. In addition to behavioral end-points, dopamine and its metabolites were evaluated neurochemically in the striatum. Furthermore, the integrity of the dopamine active transport (DAT) system was evaluated by using 123I-β-CIT and SPECT/CT imaging on weeks 3, 8 and 12 after AAV-α-synuclein transfection. Histopathology was examined from end-point samples at 3 or 12 weeks after AAV-α-synuclein transfection to evaluate dopaminergic cell viability and microglial (Iba-1) activation status in the substantia nigra by using stereological analysis techniques.
This study focused on the characterization and validation of a previously published AAV-α-synuclein transfection model in rats, with the addition of novel end-points. We present the long-term phenotype of AAV-α-synuclein-transfected rats using traditionally employed behavioral tests, but also using novel fine motor analysis techniques and tremor analysis, which provide new insight into the unilateral effects of AAV-α-synuclein transfection. We also present data on neurochemical and neuropathological end-points for the dopaminergic system in the model and how well they correlate with the behavioral phenotype.

Keywords: adeno-associated virus, alpha-synuclein, animal model, Parkinson’s disease

Procedia PDF Downloads 294
1219 Monoallelic and Biallelic Deletions of 13q14 in a Group of 36 CLL Patients Investigated by CGH Haematological Cancer and SNP Array (8x60K)

Authors: B. Grygalewicz, R. Woroniecka, J. Rygier, K. Borkowska, A. Labak, B. Nowakowska, B. Pienkowska-Grela

Abstract:

Introduction: Chronic lymphocytic leukemia (CLL) is the most common form of adult leukemia in the Western world. Hemizygous and/or homozygous loss at 13q14 occurs in more than half of cases and constitutes the most frequent chromosomal abnormality in CLL. It is believed that 13q14 deletions play a role in CLL pathogenesis. Two microRNA genes, miR-15a and miR-16-1, are targets of 13q14 deletions and play a tumor suppressor role by targeting the antiapoptotic BCL2 gene. Deletion size, as a single change detected in FISH analysis, has prognostic significance. Patients with small deletions, without RB1 gene involvement, have the best prognosis and the longest overall survival time (OS 133 months). In patients with a bigger deletion region containing the RB1 gene, prognosis drops to intermediate, as in patients with a normal karyotype and without changes in FISH, with overall survival of 111 months. Aim: Precise delineation of 13q14 deletion regions in two groups of CLL patients, with mono- and biallelic deletions, and qualification of their prognostic significance. Methods: Detection of 13q14 deletions was performed by FISH analysis with a CLL probe panel (D13S319, LAMP1, TP53, ATM, CEP-12). Accurate deletion size detection was performed by CGH Haematological Cancer and SNP array (8x60K). Results: Our investigated group of CLL patients with the 13q14 deletion detected by FISH analysis comprised two groups: 18 patients with monoallelic deletions and 18 patients with biallelic deletions. In FISH analysis, the range of cells with the deletion was 43% to 97% in the monoallelic group, while in the biallelic group the deletion was detected in 11% to 94% of cells. Microarray analysis revealed precise deletion regions. In the monoallelic group, the size range was 348.12 kb to 34.82 Mb, with a median deletion size of 7.93 Mb. In the biallelic group, the total deletion size range was 135.27 kb to 33.33 Mb, with a median deletion size of 2.52 Mb.
The median size of the smaller deletion region on one copy of chromosome 13 was 1.08 Mb, while the average size of the bigger deletion on the second chromosome 13 was 4.04 Mb. In the monoallelic group, the deletion region covered the RB1 gene in 8/18 cases. In the biallelic group, 4/18 cases showed an RB1 deletion on one copy of the biallelic deletion, and 2/18 showed deletion of the RB1 gene on both deleted 13q14 regions. All minimal deleted regions included the miR-15a and miR-16-1 genes. Genetic results will be correlated with clinical data. Conclusions: Application of the CGH microarray technique in CLL allows accurate delineation of the size of 13q14 deletion regions, which has prognostic value. All deleted regions included miR-15a and miR-16-1, which confirms the essential role of these genes in CLL pathogenesis. In our investigated groups of CLL patients with mono- and biallelic 13q14 deletions, patients with biallelic deletions presented smaller deletion sizes (2.52 Mb vs. 7.93 Mb), which is connected with a better prognosis.

Keywords: CLL, deletion 13q14, CGH microarrays, SNP array

Procedia PDF Downloads 254
1218 Kriging-Based Global Optimization Method for Bluff Body Drag Reduction

Authors: Bingxi Huang, Yiqing Li, Marek Morzynski, Bernd R. Noack

Abstract:

We propose a Kriging-based global optimization method for active flow control with multiple actuation parameters. This method is designed to converge quickly and to avoid getting trapped in local minima. We follow the model-free explorative gradient method (EGM) in alternating between explorative and exploitive steps. This facilitates a convergence similar to a gradient-based method alongside the parallel exploration of potentially better minima. In contrast to EGM, both kinds of steps are performed with a Kriging surrogate model built from the available data. The explorative step maximizes the expected improvement, i.e., favors regions of large uncertainty. The exploitive step identifies the best location of the cost function from the Kriging surrogate model for a subsequent weight-biased linear-gradient descent search. To verify the effectiveness and robustness of the improved Kriging-based optimization method, we have examined several comparative test problems of varying dimensions with limited evaluation budgets. The results show that the proposed algorithm significantly outperforms model-free optimization algorithms such as the genetic algorithm and the differential evolution algorithm, with quicker convergence for a given budget. We have also performed direct numerical simulations of the fluidic pinball (N. Deng et al. 2020 J. Fluid Mech.), three circular cylinders in an equilateral-triangular arrangement immersed in an incoming flow at Re=100. The optimal cylinder rotations lead to 44.0% net drag power saving with 85.8% drag reduction and 41.8% actuation power. The optimal results for active flow control based on this configuration achieve a boat-tailing mechanism by employing Coanda forcing, and wake stabilization by delaying separation and minimizing the wake region.
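The explorative step described above, maximizing expected improvement over a Kriging (Gaussian-process) surrogate, can be sketched in one dimension. This is our illustrative reconstruction, not the authors' implementation; the RBF length-scale, the toy cost function, and the candidate grid are invented:

```python
import math

def rbf(a, b, ls=0.3):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def solve(A, y):
    """Gauss-Jordan elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def kriging_predict(X, y, x, noise=1e-8):
    """Simple-Kriging predictive mean and standard deviation at point x."""
    n = len(X)
    K = [[rbf(X[i], X[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, y)                     # K^-1 y
    k = [rbf(xi, x) for xi in X]
    mu = sum(a * ki for a, ki in zip(alpha, k))
    v = solve(K, k)                         # K^-1 k
    var = rbf(x, x) - sum(ki * vi for ki, vi in zip(k, v))
    return mu, math.sqrt(max(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI for minimization under a Gaussian predictive distribution."""
    z = (best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return (best - mu) * Phi + sigma * phi

# Toy cost function and explorative step: maximize EI over a candidate grid.
f = lambda x: (x - 0.7) ** 2
X = [0.0, 0.4, 1.0]                         # sampled actuation parameters
y = [f(x) for x in X]
best = min(y)
grid = [i / 100 for i in range(101)]
x_next = max(grid, key=lambda x: expected_improvement(*kriging_predict(X, y, x), best))
```

The EI criterion automatically trades off low predicted cost against high predictive uncertainty, which is the mechanism that lets the explorative step escape local minima.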

Keywords: direct numerical simulations, flow control, kriging, stochastic optimization, wake stabilization

Procedia PDF Downloads 105
1217 Development and Characterization of Novel Topical Formulation Containing Niacinamide

Authors: Sevdenur Onger, Ali Asram Sagiroglu

Abstract:

Hyperpigmentation is a cosmetically unappealing skin problem caused by an overabundance of melanin in the skin. Its pathophysiology involves melanocytes being exposed to paracrine melanogenic stimuli, which can upregulate melanogenesis-related enzymes (such as tyrosinase) and cause melanosome formation. Tyrosinase is biochemically linked to the development of melanosomes; therefore, decreasing tyrosinase activity to reduce melanosomes has become the main target of hyperpigmentation treatment. Niacinamide (NA) is a natural chemical found in a variety of plants that is used as a skin-whitening ingredient in cosmetic formulations. NA decreases melanogenesis in the skin by inhibiting melanosome transfer from melanocytes to the overlying keratinocytes. Furthermore, NA protects the skin from reactive oxygen species and supports the skin's main barrier, reducing moisture loss by increasing ceramide and fatty acid synthesis. However, it is very difficult for hydrophilic compounds such as NA to penetrate deep into the skin. Furthermore, because of the nicotinic acid it contains, NA can be an irritant. As a result, we have concentrated on strategies to increase NA skin permeability while avoiding its irritating effects. Since nanotechnology can affect drug penetration behavior by controlling release and increasing the period of permanence on the skin, it can be a useful technique in the development of whitening formulations. Liposomes have become increasingly popular in the cosmetics industry in recent years due to benefits such as their lack of toxicity, high penetration ability into living skin layers, ability to increase skin moisture by forming a thin layer on the skin surface, and suitability for large-scale production. Therefore, liposomes containing NA were developed for this study.
Different formulations were prepared by varying the amounts of phospholipid and cholesterol and examined in terms of particle size, polydispersity index (PDI) and pH. The pH values of the produced formulations were found to be compatible with the pH of the skin. Particle sizes were determined to be smaller than 250 nm, and the particles were found to be of homogeneous size in the formulation (PDI < 0.30). Despite the important advantages of liposomal systems, they have low viscosity and limited stability for topical use. For these reasons, liposomal cream formulations have been prepared in this study for easy topical application of the liposomal systems. As a result, liposomal cream formulations containing NA have been successfully prepared and characterized. Following the in-vitro release and ex-vivo diffusion studies to be conducted in the continuation of the study, it is planned to test the best-performing formulation on volunteers after obtaining the approval of the ethics committee.

Keywords: delivery systems, hyperpigmentation, liposome, niacinamide

Procedia PDF Downloads 111
1216 A Damage-Plasticity Concrete Model for Damage Modeling of Reinforced Concrete Structures

Authors: Thanh N. Do

Abstract:

This paper addresses the modeling of two critical behaviors of concrete material in reinforced concrete components: (1) the increase in strength and ductility due to confining stresses from surrounding transverse steel reinforcements, and (2) the progressive deterioration in strength and stiffness due to high strain and/or cyclic loading. To improve the state-of-the-art, the author presents a new 3D constitutive model of concrete material based on plasticity and continuum damage mechanics theory to simulate both the confinement effect and the strength deterioration in reinforced concrete components. The model defines a yield function of the stress invariants and a compressive damage threshold based on the level of confining stresses to automatically capture the increase in strength and ductility when subjected to high compressive stresses. The model introduces two damage variables to describe the strength and stiffness deterioration under tensile and compressive stress states. The damage formulation characterizes well the degrading behavior of concrete material, including the nonsymmetric strength softening in tension and compression, as well as the progressive strength and stiffness degradation under primary and follower load cycles. The proposed damage model is implemented in a general purpose finite element analysis program allowing an extensive set of numerical simulations to assess its ability to capture the confinement effect and the degradation of the load-carrying capacity and stiffness of structural elements. It is validated against a collection of experimental data of the hysteretic behavior of reinforced concrete columns and shear walls under different load histories. These correlation studies demonstrate the ability of the model to describe vastly different hysteretic behaviors with a relatively consistent set of parameters. The model shows excellent consistency in response determination with very good accuracy. 
Its numerical robustness and computational efficiency are also very good and will be further assessed with large-scale simulations of structural systems.
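The damage variables described above act, in essence, as stiffness- and strength-reduction factors on the stress. A deliberately reduced one-dimensional, single-variable sketch of that mechanism (the exponential softening law, modulus, threshold, and parameter values below are illustrative assumptions, not the paper's 3D constitutive model):

```python
import math

def damage_1d(strain_path, E=30e3, eps0=1e-4, a=0.99, b=500.0):
    """1D continuum-damage stress update: sigma = (1 - d) * E * eps.
    The damage variable d grows irreversibly once the strain history
    variable kappa exceeds the threshold eps0 (exponential softening)."""
    d, kappa, stresses = 0.0, eps0, []
    for eps in strain_path:
        kappa = max(kappa, abs(eps))   # history variable never decreases
        if kappa > eps0:
            d = a * (1.0 - math.exp(-b * (kappa - eps0)))
        stresses.append((1.0 - d) * E * eps)
    return stresses

# Monotonic tension: stress rises, peaks, then softens as damage accumulates;
# any unloading would follow the degraded secant stiffness (1 - d) * E.
path = [i * 2e-5 for i in range(1, 201)]
stress = damage_1d(path)
```

Because the history variable never decreases, reloading after an unload rejoins the softening envelope, which is the basic ingredient behind the stiffness degradation under load cycles described in the abstract.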

Keywords: concrete, damage-plasticity, shear wall, confinement

Procedia PDF Downloads 169
1215 An Analysis of Possible Implications of Patent Term Extension in Pharmaceutical Sector on Indian Consumers

Authors: Anandkumar Rshindhe

Abstract:

Patents are considered a good monopoly in India. It is a mechanism by which the inventor is encouraged to invent and also to make a new, useful technology available to society at large. The patent system does not provide any protection to the invention itself but to the claims (rights) which the patentee has identified in relation to his invention. Thus the patentee is granted a monopoly to the extent of the rights he has recognized in the form of utilities, and all other utilities of the invention are for the public. Thus both the inventor and the public at large, that is, the ultimate consumer, benefit. But developing any such technology is not free of cost. Inventors invest a great deal in coming out with new technologies. One such example is the pharmaceutical industry. Pharmaceutical companies do a lot of research and invest a lot of money, time and labour in coming out with these inventions. Once an invention is made or a process identified, inventors approach the patent system to protect their rights in the form of claims over the invention. The patent system takes its own time in recognizing the invention as a patent. Even after the grant of a patent, pharmaceutical companies need to comply with many other legal formalities to launch it as a drug (medicine) in the market. Thus a major portion of the patent term is unproductive for the patentee, and the limited period the patentee gets is not sufficient to recover the cost involved in the invention; as a result, the price of the patented product is raised considerably, just to recover the cost of invention. This is ultimately a burden on the consumer, who pays more only because the legislature has failed to provide for the delay and loss caused to the patentee. This problem can be effectively remedied by patent term extension. Due to patent term extension, the inventor gets more time to recover the cost of invention.
Thus the end product can be much cheaper than without patent term extension. The basic question here is: when the patent period granted to a patentee is only 20 years, out of which a major portion is spent in complying with necessary legal formalities before the medicine becomes available in the market, does the company, with its limited period of monopoly, recover the investment made in research? Further, the Indian Patents Act has certain provisions making it mandatory for the patentee to make its patented invention available at a reasonably affordable price in India. In the light of the above questions, is extending the term of a patent a proper solution and a necessary requirement to protect the interest of the patentee as well as the ultimate consumer? The basic objective of this paper is to examine the implications of extending the patent term for Indian consumers: whether it benefits the patentee and the consumer, or creates hardship for the generic industry and the consumer.

Keywords: patent term extension, consumer interest, generic drug industry, pharmaceutical industries

Procedia PDF Downloads 451
1214 Interbrain Synchronization and Multilayer Hyper brain Networks when Playing Guitar in Quartet

Authors: Viktor Müller, Ulman Lindenberger

Abstract:

Neurophysiological evidence suggests that the physiological states of a system are characterized by specific network structures and network topology dynamics, demonstrating a robust interplay between network topology and function. It is also evident that interpersonal action coordination or social interaction (e.g., playing music in duets or groups) requires strong intra- and interbrain synchronization, resulting in specific hyper brain network activity across two or more brains to support such coordination or interaction. Such complex hyper brain networks can be described as multiplex or multilayer networks that have a specific multidimensional or multilayer network organization characteristic of superordinate systems and their constituents. The aim of the study was to describe the multilayer hyper brain networks and synchronization patterns of guitarists playing in a quartet by using electroencephalography (EEG) hyper scanning (simultaneous EEG recording from multiple brains), followed by time-frequency decomposition and multilayer network construction, where within-frequency coupling (WFC) represents communication within different layers, and cross-frequency coupling (CFC) depicts communication between these layers. Results indicate that communication or coupling dynamics, both within and between the layers across the brains of the guitarists, play an essential role in action coordination and are particularly enhanced during periods of high demands on musical coordination. Moreover, multilayer hyper brain network topology and the dynamical structure of the guitar sounds showed specific guitar-guitar, brain-brain, and guitar-brain causal associations, indicating multilevel dynamics with upward and downward causation contributing to the superordinate system dynamics and hyper brain functioning.
It is concluded that the neuronal dynamics during interpersonal interaction are brain-wide and frequency-specific with the fine-tuned balance between WFC and CFC and can best be described in terms of multilayer multi-brain networks with specific network topology and connectivity strengths. Further sophisticated research is needed to deepen our understanding of these highly interesting and complex phenomena.
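As a toy illustration (not the authors' analysis pipeline), the within- and cross-frequency coupling measures above can be sketched with an n:m phase-locking value over two phase series; the signals and frequencies below are hypothetical:

```python
import math

def plv(phases_a, phases_b, n=1, m=1):
    """n:m phase-locking value between two phase series (radians).
    n = m = 1 gives within-frequency coupling (WFC); unequal n, m
    give cross-frequency coupling (CFC), e.g. between a 10 Hz and
    a 5 Hz layer of a multilayer network."""
    diffs = [n * a - m * b for a, b in zip(phases_a, phases_b)]
    re = sum(math.cos(d) for d in diffs) / len(diffs)
    im = sum(math.sin(d) for d in diffs) / len(diffs)
    return math.hypot(re, im)  # 1 = perfect locking, 0 = none

# Toy phases: a 10 Hz oscillation and a phase-locked 5 Hz one.
t = [i / 500.0 for i in range(500)]           # 1 s at 500 Hz
phi_10 = [2 * math.pi * 10 * x for x in t]    # "layer" at 10 Hz
phi_5 = [2 * math.pi * 5 * x for x in t]      # "layer" at 5 Hz

wfc = plv(phi_10, phi_10)           # identical phases: PLV = 1
cfc = plv(phi_10, phi_5, n=1, m=2)  # perfect 1:2 locking: PLV = 1
```

In a hyperbrain network, such a coupling value would be computed for every pair of electrode-frequency nodes, within and across the musicians' brains, and thresholded into edge weights of the multilayer graph.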

Keywords: EEG hyperscanning, intra- and interbrain coupling, multilayer hyperbrain networks, social interaction, within- and cross-frequency coupling

Procedia PDF Downloads 72
1213 Dependence of Densification, Hardness and Wear Behaviors of Ti6Al4V Powders on Sintering Temperature

Authors: Adewale O. Adegbenjo, Elsie Nsiah-Baafi, Mxolisi B. Shongwe, Mercy Ramakokovhu, Peter A. Olubambi

Abstract:

The sintering step in powder metallurgy (P/M) processes is critical, as it largely determines the properties of the final component. Over the past decade, spark plasma sintering has been used extensively to consolidate a wide range of materials, including metallic alloy powders. This non-conventional sintering method has proven advantageous over conventional methods, offering full densification, high heating rates, low sintering temperatures, and short sintering cycles. Ti6Al4V is regarded as the most widely used α+β alloy owing to its impressive mechanical performance in service environments, especially in the aerospace and automobile industries, where its low density supports fuel efficiency. The P/M route is a promising fabrication method for Ti6Al4V parts because it reduces cost and material loss and can produce near-net and intricate shapes. However, the use of this alloy has been largely limited by its relatively poor hardness and wear properties. The present study investigated the effect of sintering temperature on the densification, hardness, and wear behavior of spark plasma sintered Ti6Al4V powders. The alloy powders were sintered in the 650–850 °C temperature range at a constant heating rate, applied pressure, and holding time of 100 °C/min, 50 MPa, and 5 min, respectively. Density was measured according to Archimedes' principle, and microhardness tests were performed on sectioned, as-polished surfaces at a load of 100 gf and a dwell time of 15 s. Dry sliding wear tests were performed at sliding loads of 5, 15, 25, and 35 N in a ball-on-disc tribometer configuration with WC as the counterface material. The sintered samples and wear tracks were characterized by SEM and EDX.
The density and hardness of the sintered samples increased with sintering temperature. Near-full densification (99.6% of the theoretical density) and a Vickers micro-indentation hardness of 360 HV were attained at 850 °C. The coefficient of friction (COF) and wear depth improved significantly with increased sintering temperature under all loading conditions except 25 N, indicating better mechanical properties at higher sintering temperatures. Worn surface analyses showed that the wear mechanism was a synergy of adhesive and abrasive wear, with the former prevalent.
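For reference, the relative density obtained from Archimedes' principle can be sketched as below; the masses and the theoretical density of 4.43 g/cm³ for Ti6Al4V are illustrative assumptions, not the paper's measurements:

```python
def archimedes_density(m_air, m_water, rho_water=0.9982):
    """Bulk density (g/cm^3) from the dry mass and the mass of the
    sample suspended in water, per Archimedes' principle."""
    return m_air * rho_water / (m_air - m_water)

def relative_density(m_air, m_water, rho_theoretical=4.43):
    """Percent of theoretical density; 4.43 g/cm^3 is a commonly
    quoted theoretical density for Ti6Al4V (assumed here)."""
    return 100.0 * archimedes_density(m_air, m_water) / rho_theoretical

# Hypothetical masses chosen so the sample comes out near full density:
rd = relative_density(m_air=10.000, m_water=7.733)
```

A fully dense sample would give exactly the theoretical density; residual porosity lowers the bulk density and hence the relative density below 100%.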

Keywords: hardness, powder metallurgy, spark plasma sintering, wear

Procedia PDF Downloads 271
1212 Infection Control Drill: To Assess the Readiness and Preparedness of Staffs in Managing Suspected Ebola Patients in Tan Tock Seng Hospital Emergency Department

Authors: Le Jiang, Chua Jinxing

Abstract:

Introduction: The recent outbreak of Ebola virus disease in West Africa has drawn global concern. With a high fatality rate and direct human-to-human transmission, it has spread between countries and caused great harm to affected patients and their families. As the designated hospital for managing epidemic outbreaks in Singapore, Tan Tock Seng Hospital (TTSH) faces great challenges in preparing for and managing potential outbreaks of emerging infectious diseases such as Ebola virus disease. Aim: We conducted an infection control drill in the TTSH emergency department to assess the readiness of healthcare and allied health workers in managing suspected Ebola patients. The drill also served to review the current Ebola clinical protocol and work instructions to ensure smoother and safer practice in managing Ebola patients in the TTSH emergency department. Result: The general preparedness of staff involved in managing Ebola virus disease in the TTSH emergency department was not adequate. Staff knowledge deficits in the gowning and degowning of Ebola personal protective equipment increase the risk of cross contamination during patient care. Gaps were also found in the current clinical protocol, such as unclear instructions and inaccurate information, which need to be revised to promote better staff performance in patient management. Logistic issues such as equipment dysfunction and inadequate supplies can lead to ineffective communication among teams and harm to patients in emergency situations. Conclusion: The infection control drill identified the need for well-structured, clear clinical protocols to support participants' performance. In addition to quality protocols and guidelines, systematic training and annual refreshers for all emergency department staff are essential to prepare them for an outbreak of Ebola virus disease.
Collaboration and communication with allied health staff are also crucial for the smooth delivery of patient care and for minimising the human suffering, property loss, and injuries caused by the disease. More drills involving collaboration among the various departments are therefore recommended to monitor and assess the readiness of the TTSH emergency department in managing Ebola virus disease.

Keywords: ebola, emergency department, infection control drill, Tan Tock Seng Hospital

Procedia PDF Downloads 120
1211 A Study on Accident Result Contribution of Individual Major Variables Using Multi-Body System of Accident Reconstruction Program

Authors: Donghun Jeong, Somyoung Shin, Yeoil Yun

Abstract:

A large-scale traffic accident refers to an accident in which more than three people die or more than thirty people are killed or injured. To prevent a large-scale traffic accident from causing a great loss of life, and to establish effective improvement measures, it is important to analyze accident situations in depth and understand the effects of major accident variables on the outcome. This study analyzes the contribution of individual accident variables to accident results, based on accurate reconstruction of traffic accidents using the Multi-Body (MB) system of the accident reconstruction program PC-Crash and simulation of each scenario. The MB system of PC-Crash enables multi-body accident reconstruction that captures motions in diverse directions not accessible to earlier approaches; it models a body as several linked bodies, reproducing realistic motions. Targeting the freight truck cargo drop accident near the Changwon Tunnel in November 2017, this study simulated the cargo drop accident and analyzed the contribution of individual major accident variables. Six scenarios were devised on the basis of driving speed, cargo load, and stacking method. The simulation showed that, right before the accident, the freight truck was traveling at 118 km/h (speed limit: 70 km/h), carried 196 oil containers weighing 7,880 kg (maximum load: 4,600 kg), and was not fully equipped with anchoring equipment that could prevent a cargo drop. The vehicle speed, cargo load, and cargo anchoring equipment were therefore the major accident variables, and the contribution analysis for each variable is as follows. When the truck obeyed only the speed limit, the scattering distance of the oil containers decreased by 15% and the number of dropped containers decreased by 39%.
When the truck obeyed only the cargo load limit, the scattering distance of the oil containers decreased by 5% and the number of dropped containers decreased by 34%. When the truck obeyed both the speed limit and the cargo load limit, the scattering distance fell by 38% and the number of dropped containers fell by 64%. The scenario analysis revealed that the truck's overspeed and excessive cargo load contributed to the extent of the accident damage; even a truck whose cargo was properly anchored produced a different type of accident when driven too fast with an excessive load, and the truck that obeyed both the speed limit and the cargo load limit had the lowest likelihood of causing an accident.
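The scenario contributions above are simple percent decreases relative to the as-occurred baseline. A minimal sketch (the baseline and per-scenario drop counts below are hypothetical; only the relative decreases mirror the abstract's figures):

```python
def pct_decrease(baseline, value):
    """Percent decrease of `value` relative to `baseline`."""
    return 100.0 * (baseline - value) / baseline

# Hypothetical counts of dropped containers per simulated scenario:
baseline = 100       # as-occurred scenario (overspeed + overload)
speed_only = 61      # obeying only the speed limit: 39% fewer drops
load_only = 66       # obeying only the cargo load limit: 34% fewer
both = 36            # obeying both limits: 64% fewer

reductions = {name: pct_decrease(baseline, v)
              for name, v in [("speed", speed_only),
                              ("load", load_only),
                              ("both", both)]}
```

Note that the combined-compliance reduction (64%) exceeds either single variable's reduction, which is what lets the study rank the contribution of each variable.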

Keywords: accident reconstruction, large-scale traffic accident, PC-Crash, MB system

Procedia PDF Downloads 198
1210 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”

Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen

Abstract:

Exoplanet atmospheric parameter retrieval is a complex, computationally intensive inverse modeling problem in which an exoplanet's atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, using algorithms that compare large numbers of known atmospheric models to the input spectral data, with runtimes growing directly with the number of parameters under consideration. These power and runtime requirements are difficult to accommodate on space missions, where model size, speed, and power consumption are of particular importance; traditional Bayesian sampling therefore compromises either model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional conditional generative adversarial network that improves on the speed and accuracy of previous models. We demonstrate the efficacy of artificial intelligence in quickly and reliably predicting atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to run on low-power application-specific integrated circuits. Applying edge computing to atmospheric retrievals allows real- or near-real-time quantification of atmospheric constituents at the instrument level. Edge computing also provides high-performance, power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ARcGAN serves as a strong basis for developing a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.
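The computational motivation can be illustrated with a toy count of forward-model evaluations: a sampling-based retrieval must evaluate the forward model once per candidate parameter combination (here an exhaustive grid, whose cost multiplies with each added parameter), whereas a trained generator such as ARcGAN amortizes that cost into a single network pass. The forward model and grid sizes below are hypothetical, not the paper's:

```python
import itertools

def forward_model(params):
    """Stand-in for a radiative-transfer forward model (hypothetical):
    maps atmospheric parameters to a scalar 'spectrum' summary."""
    return sum(p * p for p in params)

def grid_retrieval(observed, grids):
    """Exhaustive grid search: evaluate every parameter combination
    and keep the one whose modeled spectrum best matches the data."""
    evals = 0
    best, best_err = None, float("inf")
    for combo in itertools.product(*grids):
        evals += 1
        err = abs(forward_model(combo) - observed)
        if err < best_err:
            best, best_err = combo, err
    return best, evals

grids = [[0.0, 0.5, 1.0]] * 5  # 5 parameters, 3 candidate values each
best, n_evals = grid_retrieval(1.25, grids)
# 3**5 = 243 forward-model evaluations; a sixth parameter would
# triple the cost, while a trained network still needs one pass.
```

This is why, on power-constrained flight hardware, replacing repeated forward-model evaluation with a single amortized inference pass is attractive.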

Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval

Procedia PDF Downloads 168
1209 Differences in Production of Knowledge between Internationally Mobile versus Nationally Mobile and Non-Mobile Scientists

Authors: Valeria Aman

Abstract:

This study examines the impact of international mobility on knowledge production, both among mobile scientists and within the sending and receiving research groups. Scientists matter to the dynamics of knowledge production because scientific knowledge is largely embedded and tacit. International mobility enables the dissemination of scientific knowledge to other places and encourages new combinations of knowledge; it can also increase the interdisciplinarity of research by forming synergetic combinations of knowledge. Particularly innovative ideas can have their roots in related research domains and are sometimes transferred only through the physical mobility of scientists. Diversity among scientists with respect to their knowledge base can act as an engine for the creation of knowledge. It is therefore relevant to study how knowledge acquired through international mobility affects the knowledge production process; in certain research domains, international mobility may be essential to contextualize knowledge and to gain access to knowledge located in distant places. The study examines how the knowledge production process depends on the type of international mobility and the epistemic culture of a research field. The production of scientific knowledge is a multifaceted process whose output is mainly published in scholarly journals, so the study builds on publication and citation data covered in Elsevier's Scopus database for the period 1996 to 2015. These data are analysed with bibliometric and social network analysis techniques: a basic analysis of scientific output using publication data, citation data, and data on co-authored publications is combined with a content map analysis, and publication abstracts indicate whether a research stay abroad made an original methodological, theoretical, or empirical contribution.
Moreover, co-citations are analysed to map linkages among scientists and emerging research domains. Finally, acknowledgements are studied, as they can serve as channels of formal and informal communication between the actors involved in knowledge production. The results improve our understanding of how the international mobility of scientists contributes to the production of knowledge by contrasting the knowledge production dynamics of internationally mobile scientists with those of nationally mobile or immobile scientists. The findings also indicate whether international mobility accelerates the production of knowledge and the emergence of new research fields.
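The co-citation analysis mentioned above rests on a simple count: two works are linked whenever a third paper cites both. A minimal sketch (the citing papers and reference lists are hypothetical):

```python
from collections import Counter
from itertools import combinations

def co_citation_counts(reference_lists):
    """Count how often each pair of cited works appears together
    in the reference list of the same citing paper."""
    pairs = Counter()
    for refs in reference_lists.values():
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical citing papers and their reference lists:
refs = {
    "paper1": ["A", "B", "C"],
    "paper2": ["A", "B"],
    "paper3": ["B", "C"],
}
links = co_citation_counts(refs)
# ("A", "B") and ("B", "C") are each co-cited twice; ("A", "C") once.
```

The resulting pair counts become edge weights of a co-citation network, on which standard social network analysis (clustering, centrality) can then reveal linkages among scientists and emerging research domains.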

Keywords: bibliometrics, diversity, interdisciplinarity, international mobility, knowledge production

Procedia PDF Downloads 291
1208 Transcranial Electric Field Treatments on Redox-Toxic Iron Deposits in Transgenic Alzheimer’s Disease Mouse Models: The Electroceutical Targeting of Alzheimer’s Disease

Authors: Choi Younshick, Lee Wonseok, Lee Jaemeun, Park Sun-Hyun, Kim Sunwoung, Park Sua, Kim Eun Ho, Kim Jong-Ki

Abstract:

Iron accumulation in the brain accelerates the progression of Alzheimer's disease (AD). To counter this iron toxicity, we assessed the therapeutic effects of noncontact transcranial electric field stimulation of the brain on toxic iron deposits in either the Aβ-fibril structure or the Aβ plaque in a mouse model of AD. A capacitive electrode-based alternating electric field (AEF) was applied to a suspension of magnetite (Fe₃O₄) to measure the field-sensitized electro-Fenton effect and the resulting generation of reactive oxygen species (ROS). The increase in ROS generation relative to the untreated control depended on both exposure time and AEF frequency. Frequency-specific AEF exposure at 0.7–1.4 V/cm of a magnetite-bound Aβ fibril or a transgenic AD mouse model produced simultaneous removal of the intraplaque ferrous magnetite deposits and the Aβ-plaque burden compared to the untreated control. Behavioral tests showed an improvement in impaired cognitive function following AEF treatment of the AD mouse model. Western blot assays revealed disease-modifying biological responses, including down-regulation of ferroptosis, neuroinflammation, and reactive astrocytes, which together made the cognitive improvement feasible. Tissue clearing and 3D-imaging analysis revealed no damage to the neuronal structures of normal brain tissue following AEF treatment. In conclusion, our results suggest that the effective degradation of magnetite-bound amyloid fibrils or plaques in the AD brain by the electro-Fenton effect from field-sensitized magnetite offers a potential electroceutical treatment option for AD.

Keywords: electroceutical, intraplaque magnetite, Alzheimer’s disease, transcranial electric field, electro-Fenton effect

Procedia PDF Downloads 70