Search results for: positive response rate

461 Managing Human-Wildlife Conflicts Compensation Claims Data Collection and Payments Using a Scheme Administrator

Authors: Eric Mwenda, Shadrack Ngene

Abstract:

Human-wildlife conflicts (HWCs) are the main threat to conservation in Africa. This is because wildlife needs overlap with those of humans. In Kenya, about 70% of wildlife occurs outside protected areas. As a result, wildlife and human ranges overlap, causing HWCs. The HWCs in Kenya occur in the drylands adjacent to protected areas. The top five counties with the highest incidences of HWC are Taita Taveta, Narok, Lamu, Kajiado, and Laikipia. The common wildlife species responsible for HWCs are elephants, buffaloes, hyenas, hippos, leopards, baboons, monkeys, snakes, and crocodiles. To ensure individuals affected by the conflicts are compensated, Kenya has developed a model of HWC compensation claims data collection and payment. We collected data on HWC from all eight Kenya Wildlife Service (KWS) Conservation Areas from 2009 to 2019. Additional data were collected from stakeholders' consultative workshops held in the Conservation Areas and a literature review regarding payment for injuries and ongoing insurance schemes being practiced in these areas. This was followed by a description of the claims administration process and calculation of the pricing of the compensation claims. We further developed a digital platform for data capture and processing of all reported conflict cases and payments. Our product recognized four categories of HWC (i.e., human death and injury, property damage, crop destruction, and livestock predation). Personal bodily injury and human death were compensated based on the Continental Scale of Benefits. We proposed a maximum of Kenya Shillings (KES) 3,000,000 for death. Medical, pharmaceutical, and hospital expenses were capped at a maximum of KES 150,000, and funeral costs at KES 50,000. Pain and suffering were proposed to be paid for 12 months at the rate of KES 13,500 per month. Crop damage was to be based on farm input costs at a maximum of KES 150,000 per claim. Livestock predation leading to death was based on the Tropical Livestock Unit (TLU), which is equivalent to KES 30,000, and includes cattle (1 TLU = KES 30,000), camel (1.4 TLU = KES 42,000), goat (0.15 TLU = KES 4,500), sheep (0.15 TLU = KES 4,500), and donkey (0.5 TLU = KES 15,000). Property destruction (buildings, outside structures, and harvested crops) was capped at KES 150,000 per claim. We conclude that it is possible to use an administrator to collect data on HWC compensation claims and make payments using technology. The success of the new approach will depend on a piloting program. We recommended that a pilot scheme be initiated for eight months in Taita Taveta, Kajiado, Baringo, Laikipia, Narok, and Meru Counties. This will test the claims administration process as well as harmonize data collection methods. The results of this pilot will be crucial in adjusting the scheme before a countrywide roll-out.
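
As a minimal sketch of the livestock predation pricing rule described above, the snippet below applies the TLU equivalences quoted in the abstract to a hypothetical claim; the claim data and function name are illustrative, not part of the proposed digital platform.

```python
# Illustrative sketch of the TLU-based livestock predation pricing described above.
# The TLU equivalences come from the abstract; the example claim is invented.
TLU_VALUE_KES = 30_000
TLU_PER_ANIMAL = {"cattle": 1.0, "camel": 1.4, "goat": 0.15, "sheep": 0.15, "donkey": 0.5}

def predation_compensation(claim: dict[str, int]) -> float:
    """Compensation in KES for livestock killed, e.g. {"cattle": 1, "goat": 3}."""
    return sum(TLU_PER_ANIMAL[species] * count * TLU_VALUE_KES
               for species, count in claim.items())

print(predation_compensation({"cattle": 1, "goat": 3}))  # 30,000 + 3 x 4,500 = 43,500 KES
```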

Keywords: human-wildlife conflicts, compensation, human death and injury, crop destruction, predation, property destruction

Procedia PDF Downloads 27
460 India’s Energy Transition, Pathways for Green Economy

Authors: B. Sudhakara Reddy

Abstract:

In the modern economy, energy is fundamental to virtually every product and service in use. This economy has developed on a dependence on abundant, easy-to-transform, polluting fossil fuels. On one hand, increases in population and income levels combined with increased per capita energy consumption require energy production to keep pace with economic growth, and on the other, the impact of fossil fuel use on environmental degradation is enormous. The conflicting policy objectives of protecting the environment while increasing economic growth and employment have resulted in this paradox. Hence, it is important to decouple economic growth from environmental degeneration, and the search for green energy involving affordable, low-carbon, and renewable energies has become a global priority. This paper explores a transition to a sustainable energy system using the socio-economic-technical scenario method. This approach takes into account the multifaceted nature of transitions, which require not only the development and use of new technologies but also changes in user behaviour, policy, and regulation. The scenarios that are developed are: baseline business as usual (BAU) and green energy (GE). The baseline scenario assumes that current trends (energy use, efficiency levels, etc.) will continue in the future. India's population is projected to grow by 23% during 2010-2030, reaching 1.47 billion. The real GDP, as per the model, is projected to grow by 6.5% per year on average between 2010 and 2030, reaching US$5.1 trillion or $3,586 per capita (base year 2010). Due to the increase in population and GDP, the primary energy demand will double in two decades, reaching 1,397 MTOE in 2030, with the share of fossil fuels remaining around 80%. The increase in energy use corresponds to an increase in energy intensity (TOE/US$ of GDP) from 0.019 to 0.036. Carbon emissions are projected to increase 2.5 times from 2010, reaching 3,440 million tonnes, with per capita emissions of 2.2 tons/annum. However, the carbon intensity (tons per US$ of GDP) decreases from 0.96 to 0.67. As per the GE scenario, energy use will reach 1,079 MTOE by 2030, a saving of about 30% over BAU. The penetration of renewable energy resources will reduce the total primary energy demand by 23% under GE. The reduction in fossil fuel demand and the focus on clean energy will reduce the energy intensity to 0.21 (TOE/US$ of GDP) and the carbon intensity to 0.42 (ton/US$ of GDP) under the GE scenario. The study develops new 'pathways out of poverty' by creating more than 10 million jobs and thus raising the standard of living of low-income people. Our scenarios are, to a great extent, based on existing technologies. The challenges to this path lie in socio-economic-political domains. However, to attain a green economy, an appropriate policy package should be in place, which will be critical in determining the kind of investments that will be needed and the incidence of costs and benefits. These results provide a basis for policy discussions on investments, policies, and incentives to be put in place by national and local governments.
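
The scenario mechanics sketched above (compound growth of activity, an assumed energy-intensity path, and an assumed carbon factor) can be illustrated with a toy projection. All parameter values below are illustrative placeholders, not the paper's calibration.

```python
# Toy BAU vs. GE projection: activity grows at a compound rate, energy demand follows
# an assumed intensity path, and emissions follow an assumed carbon factor.
# Parameter values are placeholders, not the study's calibrated inputs.
def project(gdp_0, gdp_growth, intensity_0, intensity_change, carbon_factor, years):
    gdp = gdp_0 * (1 + gdp_growth) ** years          # compound GDP growth
    intensity = intensity_0 * (1 + intensity_change) ** years
    energy = gdp * intensity                          # primary energy demand
    emissions = energy * carbon_factor                # carbon emissions
    return round(gdp, 2), round(energy, 2), round(emissions, 2)

bau = project(gdp_0=1.0, gdp_growth=0.065, intensity_0=1.0,
              intensity_change=0.0, carbon_factor=2.5, years=20)
ge = project(gdp_0=1.0, gdp_growth=0.065, intensity_0=1.0,
             intensity_change=-0.017, carbon_factor=1.8, years=20)
print("BAU:", bau)
print("GE :", ge)   # same growth, lower energy demand and emissions under GE
```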

Keywords: energy, renewables, green technology, scenario

Procedia PDF Downloads 224
459 Psychological Distress during the COVID-19 Pandemic in Nursing Students: A Mixed-Methods Study

Authors: Mayantoinette F. Watson

Abstract:

During the COVID-19 pandemic, the largest public health crisis in recent memory, the psychological and physical well-being of nursing students is of the utmost concern. Questions are emerging and circulating about what will happen to nursing students and about the long-term effects of the pandemic, especially now that hospitals are being overwhelmed and have a significant need for nursing staff. Expectations, demands, change, and the fear of the unknown during this unprecedented time can only add to the many stressors that accompany nursing students through laborious clinical and didactic courses in nursing programs. The risk of psychological distress is at a maximum, and its effects can negatively impact not only nursing students but also nursing education and academia. High exposure to interpersonal, economic, and academic demands contributes to major health concerns, which include a potential risk for psychological distress. Achievement of educational success among nursing students is directly affected by high exposure to anxiety and depression from experiences within the program. Working relationships and achieving academic success are imperative to positive student outcomes within the nursing program. The purpose of this study is to identify and establish influences and associations among multilevel factors, including the effects of the COVID-19 pandemic, on psychological distress in nursing students. Neuman's Systems Model theory was used to examine nursing students' responses to internal and external stressors. The research utilized a convergent mixed-methods design. The study population included undergraduate nursing students from the Southeastern U.S., and the research surveyed a convenience sample of these students. The quantitative survey was completed by 202 participants, and 11 participants took part in qualitative follow-up interviews. Participants completed the Kessler Psychological Distress Scale (K6), the Perceived Stress Scale (PSS-4), and the Dundee Ready Education Environment Measure (DREEM-12) to measure psychological distress, perceived stress, and perceived educational environment. Participants also answered open-ended questions regarding their experience during the COVID-19 pandemic. Statistical tests, including bivariate analyses, multiple linear regression analyses, and binary logistic regression analyses, were performed in an effort to identify and highlight the effects of the independent variables on the dependent variable, psychological distress. Coding and qualitative content analysis were performed to identify overarching themes within participants' interviews. Quantitative data were sufficient in identifying correlations between psychological distress and the multilevel factors of coping, marital status, COVID-19 stress, perceived stress, educational environment, and social support in nursing students. Qualitative data were sufficient in identifying common themes in students' perceptions during COVID-19, which included online learning, workload, finances, experience, breaks, time, the unknown, support, encouragement, unchanged, communication, and transmission. The findings are significant, specifically regarding contributing factors to nursing students' psychological distress, which will help to improve learning in the academic environment.
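
As an illustration of the quantitative analysis step, the sketch below fits a multiple linear regression of a K6-like distress score on perceived stress and perceived educational environment. The data are synthetic placeholders generated for the example, not the study's survey responses.

```python
# Synthetic illustration of a multiple linear regression on distress scores;
# predictors, coefficients, and noise are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n = 202                                          # sample size matching the survey
perceived_stress = rng.normal(8, 3, n)           # PSS-4-like scores
educ_environment = rng.normal(30, 6, n)          # DREEM-12-like scores
k6_distress = 4 + 0.9 * perceived_stress - 0.2 * educ_environment + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), perceived_stress, educ_environment])
beta, *_ = np.linalg.lstsq(X, k6_distress, rcond=None)
print(dict(zip(["intercept", "perceived_stress", "educ_environment"], beta.round(2))))
```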

Keywords: nursing education, nursing students, pandemic, psychological distress

Procedia PDF Downloads 63
458 Strategies for Conserving Ecosystem Functions of the Aravalli Range to Combat Land Degradation: Case of Kishangarh and Tijara Tehsil in Rajasthan, India

Authors: Saloni Khandelwal

Abstract:

The Aravalli hills are one of the oldest and most distinctive mountain chains of peninsular India, spanning around 692 km. More than 60% of the range falls in the state of Rajasthan and influences the ecological equilibrium in about 30% of the state. Because of natural and human-induced activities, physical gaps in the Aravallis are widening, new gaps are appearing, and the physical structure of the range is changing. There are no strict regulations to protect and monitor the Aravallis, and no comprehensive research has been done on enhancing the ecosystem functions of these ranges. Through this study, various factors leading to the Aravalli's degradation are identified, and their impacts on selected areas are analyzed. A literature study is done to identify the factors responsible for the degradation. To understand the severity of the problem at the lowest level, two tehsils from different districts in Rajasthan, which are among the most affected by illegal mining and increasing physical gaps, are selected for the study. Case 1, covering three gram panchayats in Kishangarh Tehsil of Ajmer district, focuses on the expanding physical gaps in the Aravalli range, and Case 2, covering three gram panchayats in Tijara Tehsil of Alwar district, focuses on increasing illegal mining in the Aravalli range. For measuring the degradation, physical, biological, and social indicators are identified through a literature review, and for both cases the analysis is done on the basis of these indicators. Primary surveys and focus group discussions are conducted with villagers, mine owners, illegal miners, and various government officials to understand the dependency of people on the Aravalli and its importance to them, along with the impact of degradation on their livelihoods and environment. From the analysis, it has been found that green cover is continuously decreasing in both cases, dense forest areas no longer exist, the groundwater table is depleting at a very fast rate, and the soil is losing its moisture, resulting in low yields and a shift in agriculture. Wild animals that were easily seen earlier have now disappeared. Villagers' cattle depend on the forest area in the Aravalli range for food, but with the decrease in fodder, cattle numbers are decreasing. There is a decrease in agricultural land and an increase in scrub and salt-affected land. An analysis of various national and state programmes and acts passed to conserve biodiversity shows that none of them is helping much to protect the Aravalli. For conserving the Aravalli and its forest areas, regional-level and local-level initiatives are required and are proposed in this study. This study is an attempt to formulate conservation and management strategies for the Aravalli range. These strategies will help in improving biodiversity, which can lead to the revival of its ecosystem functions. They will also help in curbing pollution at the regional and local levels. All this will lead to the sustainable development of the region.

Keywords: Aravalli, ecosystem, LULC, Rajasthan

Procedia PDF Downloads 110
457 A Systematic Review Regarding Caregiving Relationships of Adolescents Orphaned by AIDS and Primary Caregivers

Authors: M. Petunia Tsweleng

Abstract:

Statement of the Problem: Research and aid organisations report that children and adolescents orphaned due to HIV and AIDS are particularly vulnerable, as they are often exposed to the negative effects of both HIV and AIDS and orphanhood. Without much-needed parental love, care, and support, these children and adolescents are at risk of poor developmental outcomes. A cursory look at the available literature on AIDS-orphaned adolescents and the quality of their caregiving relationships with caregivers shows that this is a relatively under-researched terrain. This article is a review of the literature on caregiving relationships of adolescents orphaned due to AIDS and their current primary caregivers. It aims to inform community programmes and policymakers by providing insight into the qualities of these relationships. Methodology: A comprehensive search of both peer-reviewed and non-peer-reviewed literature was conducted through the EBSCOhost, SpringerLink, PsycINFO, SAGE, PubMed, Elsevier ScienceDirect, JSTOR, and Wiley Online Library databases, and Google Scholar. The keyword combinations used for the search were: (caregiving relationships); (orphans OR AIDS orphaned children OR AIDS orphaned adolescents); (primary caregivers); (quality caregiving); (orphans); and (HIV and AIDS). The search took place between 24 January and 28 February 2022. Both qualitative and quantitative research studies published between 2010 and 2020 were reviewed. However, only qualitative studies were selected in the end, as they presented more profound findings concerning orphan-caregiver relationships. The following three stages of meta-synthesis were used to analyse the data: refutational synthesis, reciprocal synthesis, and line-of-argument synthesis. Results: The search resulted in a total of 2090 titles, of which 750 were duplicates and were therefore removed. The researcher reviewed the titles and abstracts of the remaining 1340 articles; 329 articles were identified as relevant, and their full texts were reviewed. Following the review of the full texts, 313 studies were excluded for relevance and 4 for methodology. Twelve articles representing 11 studies fulfilled the inclusion criteria and were selected. These studies, representing different countries across the globe, reported similar economic, psychosocial, and health-related hardships experienced by caregivers. However, the studies also show that the majority of caregivers found contentment in caring for orphans, particularly grandmother carers, and were thus enabled to provide love, care, and support despite hardships. This resulted in positive caregiving relationships, as orphans fared well emotionally and psychosocially. Some relationships, however, were found to be negative due to unhealed emotional wounds suffered by both caregivers and orphans, and others due to the caregiver's lack of interest in providing care. These findings were based on self-report data from both orphans and caregivers. Conclusion: Findings suggest that intervention efforts need to be intensified to alleviate poverty in households affected by the HIV and AIDS pandemic, strengthen community psychosocial support programmes for orphans and their caregivers, and integrate clinical services with community programmes for the healing of emotional and psychological wounds.
Contributions: Findings inform community programmes and policymakers by providing insight into the qualities of the mentioned relationships as well as identifying factors commonly associated with high-quality caregiving and poor-quality caregiving.

Keywords: systematic review, caregiving relationships, orphans and primary caregivers, AIDS

Procedia PDF Downloads 137
456 Academic Major, Gender, and Perceived Helpfulness Predict Help-Seeking Stigma

Authors: Tran Tran

Abstract:

Mental health issues are prevalent among Vietnamese undergraduate students, and they were greatly exacerbated during the COVID-19 pandemic for this population. While there is empirical evidence supporting the effectiveness and efficiency of therapy for mental health issues among college students, the rates of Vietnamese college students seeking professional mental health services were alarmingly low. Multiple factors can prevent those in need from finding support. The Internalized Stigma Model posits that public stigma directly affects intentions to seek psychological help via self-stigma and attitudes toward seeking help. However, little research has focused on what factors can predict public stigma toward seeking professional psychological support, especially among this population. A potential predictor is academic major, since academic majors can influence undergraduate students' perceptions, attitudes, and intentions. A study suggested that students who have completed two or more psychology courses have a more positive attitude toward seeking care for mental health issues and reduced stigma, which might be attributed to increased mental health literacy. In addition, research has shown that women are more likely to utilize mental health services and have lower stigma than men. Finally, studies have also suggested that experience of mental health services can increase endorsement of perceived need and lower stigma; thus, it is expected that perceived helpfulness from past service use can reduce stigma. This study aims to address this gap in the literature and investigate which factors, specifically academic major, gender, and perceived helpfulness, can predict public stigma, potentially suggesting an avenue of prevention and ultimately improving the well-being of Vietnamese college students. The sample includes 408 undergraduate students (Mage = 20.44; 80.88% female) in Hanoi, Vietnam. Participants completed a pen-and-paper questionnaire. Students completed the Stigma Scale for Receiving Psychological Help, which yielded a mean public stigma score. Participants also completed a measure assessing the perceived helpfulness of their university's counseling center, which included eight subscales: future self-development, learning issues, career counseling, medical and health issues, mental health issues, conflicts between teachers and students, conflicts between parents and students, and interpersonal relationships. Items were summed to create a composite perceived helpfulness score. Finally, participants provided demographic information. This included gender, which was dichotomized as female versus other, and academic major, which was similarly dichotomized as psychology versus other (e.g., natural science, social science, and pedagogy and social work). Linear relationships between public stigma and gender, academic major, and perceived helpfulness were analyzed individually with regression models. Findings suggested that academic major, gender, and the perceived helpfulness of the counseling center predicted stigma against seeking professional psychological help. Specifically, being a psychology major predicted lower levels of public stigma (β = -.25, p < .001). Additionally, being female predicted lower levels of public stigma (β = -.11, p < .05). Lastly, higher levels of perceived helpfulness of the counseling center also predicted lower levels of public stigma (β = -.16, p < .01).
The study’s results offer potential intervention avenues to help reduce stigma and increase well-being for Vietnamese college students.

Keywords: stigma, Vietnamese college students, counseling services, help-seeking

Procedia PDF Downloads 63
455 Decreased Tricarboxylic Acid (TCA) Cycle Staphylococcus aureus Increases Survival to Innate Immunity

Authors: Trenten Theis, Trevor Daubert, Kennedy Kluthe, Austin Nuxoll

Abstract:

Staphylococcus aureus is a Gram-positive bacterium responsible for an estimated 23,000 deaths in the United States and 25,000 deaths in the European Union annually. Recurring S. aureus bacteremia is associated with biofilm-mediated infections and can occur in 5-20% of cases, even with the use of antibiotics. Despite these infections being caused by drug-susceptible pathogens, they are surprisingly difficult to eradicate. One potential explanation for this is the presence of persister cells, a dormant type of cell that shows a high tolerance to antibiotic treatment. Recent studies have shown a connection between low intracellular ATP and persister cell formation. Specifically, this decrease in ATP, and therefore increase in persister cell formation, is due to an interrupted tricarboxylic acid (TCA) cycle. However, the role of S. aureus persister cells in pathogenesis remains unclear. Initial studies have shown that a fumC (TCA cycle gene) knockout survives challenge from aspects of the innate immune system better than wild-type S. aureus. Specifically, challenges from two antimicrobial peptides, LL-37 and hBD-3, show a log increase in survival of the fumC::N∑ strain compared to wild-type S. aureus after 18 hours. Furthermore, preliminary studies show that the fumC knockout has one log greater survival within a macrophage. These data lead us to hypothesize that the fumC knockout is better able to withstand other aspects of the innate immune system than wild-type S. aureus. To further investigate the mechanism for the increased survival of fumC::N∑ within a macrophage, we tested bacterial growth in the presence of reactive oxygen species (ROS), reactive nitrogen species (RNS), and low pH. Preliminary results suggest that the fumC knockout has increased growth compared to wild-type S. aureus in the presence of all three antimicrobial factors combined; however, no difference was observed for any single factor alone. To investigate survival within a host, a nine-day biofilm-associated catheter infection was performed on 6-8-week-old male and female C57BL/6 mice. Although both sexes struggled to clear the infection, female mice trended toward more frequently clearing the HG003 wild-type infection compared to the fumC::N∑ infection. One possible reason for the inability to reduce the bacterial burden is that biofilms are largely composed of persister cells. To test this hypothesis further, flow cytometry in conjunction with a persister cell marker was used to measure persister cells within a biofilm. Cap5A (a known persister cell marker) expression was found to be increased in a maturing biofilm, with the lowest levels of expression seen in immature biofilms and the highest expression exhibited by the 48-hour biofilm. Additionally, bacterial cells in a biofilm state closely resemble persister cells and exhibit reduced membrane potential compared to cells in planktonic culture, further suggesting biofilms are largely made up of persister cells. These data may provide an explanation as to why infections caused by antibiotic-susceptible strains remain difficult to treat.

Keywords: antibiotic tolerance, Staphylococcus aureus, host-pathogen interactions, microbial pathogenesis

Procedia PDF Downloads 157
454 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training, and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the objective of rostering consists of two major components. The first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem, and exactly one roster is picked for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with a negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member. A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in the graph. A labeling algorithm is used to solve it. Since the penalization is quadratic, a method to deal with the non-additive shortest path problem using a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-threading techniques are used to improve the efficiency of the algorithm when generating lines of work for crew members. Here, a column-generation-based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm we propose in this paper has been put into production at a major airline in China, and numerical experiments show that it has good performance.
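
To make the pricing step concrete, below is a minimal, hypothetical sketch of the subproblem for a single crew member. It enumerates feasible rosters by brute force rather than with the labeling algorithm and tailored domination condition described above, and all pairing data, dual values, and penalty weights are illustrative assumptions rather than values from the production system.

```python
# Brute-force stand-in for the pricing subproblem of one crew member:
# find the feasible roster with the most negative reduced cost, where the
# roster cost is a quadratic (non-additive) fairness penalty on deviations
# from the expected average overnight duties and fly hours.
from itertools import combinations

pairings = {            # pairing id -> (overnight duties, fly hours); illustrative data
    "P1": (1, 10.0), "P2": (2, 15.5), "P3": (0, 7.0), "P4": (1, 12.0),
}
duals = {"P1": 40.0, "P2": 55.0, "P3": 20.0, "P4": 35.0}  # duals of pairing-covering rows
conflicts = {("P1", "P2"), ("P3", "P4")}                  # pairings that overlap in time

AVG_OVERNIGHTS, AVG_FLY_HOURS = 2.0, 25.0   # expected averages across crew members
W_OVN, W_FLY = 5.0, 1.0                     # weights of the quadratic fairness penalty

def feasible(roster):
    return all((a, b) not in conflicts and (b, a) not in conflicts
               for a, b in combinations(roster, 2))

def fairness_penalty(roster):
    ovn = sum(pairings[p][0] for p in roster)
    fly = sum(pairings[p][1] for p in roster)
    # Quadratic penalty: several small deviations cost less than one large deviation.
    return W_OVN * (ovn - AVG_OVERNIGHTS) ** 2 + W_FLY * (fly - AVG_FLY_HOURS) ** 2

def reduced_cost(roster):
    # Roster cost (here just the fairness penalty) minus the duals of the pairings covered.
    return fairness_penalty(roster) - sum(duals[p] for p in roster)

candidates = [r for n in range(len(pairings) + 1)
              for r in combinations(pairings, n) if feasible(r)]
best = min(candidates, key=reduced_cost)
print(best, reduced_cost(best))  # add this column to the RLMP if its reduced cost is negative
```

In the full algorithm, any column found with a negative reduced cost is added to the RLMP and the master is re-solved, iterating until no improving column remains or a stopping criterion is met.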

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

Procedia PDF Downloads 121
453 Impact of Displacement Durations and Monetary Costs on the Labour Market within a City Consisting of Four Areas: A Theoretical Approach

Authors: Aboulkacem El Mehdi

Abstract:

We develop a theoretical model at the crossroads of labour and urban economics, used to explain the mechanism through which the duration of home-workplace trips and their monetary costs impact labour demand and supply in a spatially scattered labour market, and how these are affected by a change in passenger transport infrastructures and services. The spatial disconnection between home and job opportunities is referred to as the spatial mismatch hypothesis (SMH). Its harmful impact on employment has been the subject of numerous theoretical propositions. However, all the theoretical models proposed so far are patterned around the American context, which is particular in that it is marked by racial discrimination against Black people in the housing and labour markets. Therefore, it is only natural that most of these models are developed in order to reproduce a steady state characterized by agents carrying out their economic activities in a mono-centric city in which most unskilled jobs are created in the suburbs, far from the Black population who dwell in the city centre, generating high unemployment rates for Black workers, while the White population resides in the suburbs and has a low unemployment rate. Our model does not rely on any racial discrimination and does not aim at reproducing a steady state in which these stylized facts are replicated; it takes the main principle of the SMH, the spatial disconnection between homes and workplaces, as a starting point. One of the innovative aspects of the model is that it deals with an SMH-related issue at an aggregate level. We link the parameters of the passenger transport system to employment in the whole area of a city. We consider here a city that consists of four areas: two of them are residential areas with unemployed workers, and the other two host firms looking for labour. The workers compare the indirect utility of working in each area with the utility of unemployment and choose between submitting an application for the job that generates the highest indirect utility or not submitting at all. This arbitration takes account of the monetary and time expenditures generated by the trips between the residential areas and the working areas. Each of these expenditures is clearly and explicitly formulated so that the impact of each can be studied separately from the other. The first findings show that unemployed workers living in an area benefiting from good transport infrastructures and services have a better chance of preferring activity to unemployment and are more likely to supply a higher 'quantity' of labour than those who live in an area where the transport infrastructures and services are poorer. We also show that the firms located in the most accessible area receive many more applications and are more likely to hire the workers who provide the highest quantity of labour than the firms located in the less accessible area. Currently, we are working on the matching process between firms and job seekers and on how the equilibrium between labour demand and supply occurs.
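
A small numerical sketch of the arbitration described above is given below; the areas, wages, fares, trip durations, and the linear utility form are illustrative assumptions, not the paper's calibrated model.

```python
# A worker in each residential area compares the indirect utility of working in each
# employment area, net of the monetary cost and the time cost of the home-workplace
# trip, with the utility of staying unemployed, and applies only where working is better.
VALUE_OF_TIME = 0.3          # utility lost per minute of commuting (assumed)
UNEMPLOYMENT_UTILITY = 60.0  # utility of remaining unemployed (assumed)

wages = {"EmpArea1": 120.0, "EmpArea2": 100.0}                    # daily wages offered
fare = {("Res1", "EmpArea1"): 8.0, ("Res1", "EmpArea2"): 4.0,
        ("Res2", "EmpArea1"): 15.0, ("Res2", "EmpArea2"): 10.0}   # monetary trip cost
minutes = {("Res1", "EmpArea1"): 35, ("Res1", "EmpArea2"): 90,
           ("Res2", "EmpArea1"): 170, ("Res2", "EmpArea2"): 110}  # trip duration

def indirect_utility(res, emp):
    # Wage net of the monetary and time expenditures of the home-workplace trip.
    return wages[emp] - fare[(res, emp)] - VALUE_OF_TIME * minutes[(res, emp)]

for res in ("Res1", "Res2"):
    utilities = {emp: indirect_utility(res, emp) for emp in wages}
    best_area, best_u = max(utilities.items(), key=lambda kv: kv[1])
    decision = f"apply in {best_area}" if best_u > UNEMPLOYMENT_UTILITY else "stay unemployed"
    print(res, utilities, "->", decision)
```

With these placeholder numbers, workers in the well-connected residential area apply for a job while workers in the poorly connected area prefer unemployment, mirroring the mechanism described above.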

Keywords: labour market, passenger transport infrastructure, spatial mismatch hypothesis, urban economics

Procedia PDF Downloads 261
452 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication, and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the 'Internet of Things (IoT)' has brought sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly smart sensors; new wireless data communication technologies; and big data analytics algorithms and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on a field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data are far too large for traditional applications to send, store, or process. The sensor unit must therefore be intelligent enough to pre-process collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model, and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next-generation smart sensor network. For example, in the water level monitoring system, a weather forecast can be obtained from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate or switch off the sleeping mode, and vice versa. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, to assist agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors as a case study, a vision of the next generation of smart sensor networks is proposed. Each key component of the smart sensor network is discussed, which we hope will inspire researchers working in the sensor research domain.
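
As a minimal sketch of the first two smart-sensing approaches mentioned above, the snippet below shows on-node pre-processing with a fixed threshold and a rolling statistical model (mean plus k standard deviations). The threshold, window size, and readings are illustrative, and the machine-learning-based MoPBAS method is not reproduced here.

```python
# On-node (or field-gateway) pre-processing of river level readings:
# simple thresholding plus a rolling statistical anomaly check.
from collections import deque
from statistics import mean, stdev

FLOOD_THRESHOLD_M = 2.5   # fixed alert level (simple thresholding), assumed
WINDOW = 12               # number of recent readings kept on the node
K = 3.0                   # sensitivity of the statistical detector

history = deque(maxlen=WINDOW)

def on_new_reading(level_m):
    """Pre-process a reading locally and return any alerts worth transmitting."""
    alerts = []
    if level_m > FLOOD_THRESHOLD_M:
        alerts.append("threshold exceeded")
    if len(history) >= 2:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and level_m > mu + K * sigma:
            alerts.append("statistically anomalous rise")
    history.append(level_m)
    return alerts

for reading in [1.1, 1.2, 1.1, 1.3, 1.2, 2.9]:
    print(reading, on_new_reading(reading))
```

Only readings that raise an alert need to be transmitted immediately, which keeps the volume of data sent over the wireless link small, in line with the intelligent, adaptable sensor unit described above.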

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 353
451 A Digital Environment for Developing Mathematical Abilities in Children with Autism Spectrum Disorder

Authors: M. Isabel Santos, Ana Breda, Ana Margarida Almeida

Abstract:

Research on the academic abilities of individuals with autism spectrum disorder (ASD) underlines the importance of mathematics interventions. Yet the design of digital applications for children and youth with ASD continues to attract little attention, namely regarding the development of mathematical reasoning, even though digital technologies are an area of great interest for individuals with this disorder and their use is certainly a facilitative strategy in the development of mathematical abilities. The use of digital technologies can be an effective way to create innovative learning opportunities for these students and to develop creative, personalized, and constructive environments where they can develop differentiated abilities. Children with ASD often respond well to learning activities involving information presented visually. In this context, we present the digital Learning Environment on Mathematics for Autistic children (LEMA), a research project leading to a PhD in Multimedia in Education, developed by the Thematic Line Geometrix, located in the Department of Mathematics, in collaboration with the DigiMedia Research Center of the Department of Communication and Art (University of Aveiro, Portugal). LEMA is a digital mathematical learning environment whose activities are dynamically adapted to the user's profile, towards the development of the mathematical abilities of children aged 6-12 years diagnosed with ASD. LEMA has already been evaluated with end-users (both students and expert teachers), and based on the analysis of the collected data, readjustments were made, enabling the continuous improvement of the prototype, namely considering the integration of universal design for learning (UDL) approaches, which are of the utmost importance in ASD due to its heterogeneity. The learning strategies incorporated in LEMA are: (i) provides options for the custom choice of math activities, according to the user's profile; (ii) integrates simple interfaces with few elements, presenting only the features and content needed for the ongoing task; (iii) uses simple visual and textual language; (iv) uses different types of feedback (auditory, visual, positive/negative reinforcement, hints with helpful instructions including math concept definitions, solved math activities using split and easier tasks and, finally, videos/animations that show a solution to the proposed activity); (v) provides information in multiple representations, such as text, video, audio, and image, for better content and vocabulary understanding, in order to stimulate, motivate, and engage users in mathematical learning, also helping users to focus on content; (vi) avoids using elements that distract or interfere with focus and attention; (vii) provides clear instructions and orientation about tasks to ease the user's understanding of the content and its language, in order to stimulate, motivate, and engage the user; and (viii) uses buttons, familiar icons, and contrast between font and background. Since these children may have low sensory tolerance and impaired motor skills, besides being able to interact with LEMA through the mouse (point and click with a single button), the user can also interact with LEMA through a Kinect device (using simple gesture moves).

Keywords: autism spectrum disorder, digital technologies, inclusion, mathematical abilities, mathematical learning activities

Procedia PDF Downloads 95
450 A Comparative Analysis on the Impact of the Prevention and Combating of Hate Crimes and Hate Speech Bill of 2016 on the Rights to Human Dignity, Equality, and Freedom in South Africa

Authors: Tholaine Matadi

Abstract:

South Africa is a democratic country with a historical record of racially motivated marginalisation and exclusion of the majority. During the apartheid era, the country was run along pieces of legislation and policies based on racial segregation. The system held a tight clamp on interracial mixing, which forced people to remain in segregated areas. For example, a citizen from the Indian community could not own property in an area allocated to white people. In this way, a great majority of people were denied basic human rights. Now, there is a supreme constitution with an entrenched, justiciable Bill of Rights founded on the democratic values of social justice, human dignity, equality, and the advancement of human rights and freedoms. The Constitution also enshrines the values of non-racialism and non-sexism. The Constitutional Court has the power to declare unconstitutional any law or conduct considered to be inconsistent with it. Now, more than two decades down the line, despite the abolition of apartheid, there is evidence that South Africa still experiences hate crimes which violate the entrenched right of vulnerable groups not to be discriminated against on the basis of race, sexual orientation, gender, national origin, occupation, or disability. To remedy this mischief, Parliament has responded by drafting the Prevention and Combating of Hate Crimes and Hate Speech Bill. The Bill has been disseminated for public comment and suggestions. It is intended to combat hate crimes and hate speech based on sheer prejudice. The other purpose of the Bill is to bring South Africa into line with international human rights instruments against racism, racial discrimination, xenophobia, and related expressions of intolerance identified in several international instruments. It is against this backdrop that this paper intends to analyse the impact of the Bill on the rights to human dignity, equality, and freedom. This study is significant because the Bill was highly contested and has created a huge debate. This study relies on a qualitative, evaluative approach based on desktop and library research. The article draws on primary and secondary sources. For comparative purposes, the paper compares South Africa with countries such as Australia, Canada, Kenya, Cuba, and the United Kingdom, which have criminalised hate crimes and hate speech. The finding from this study is that, despite the Bill's expressed positive intentions, this draft legislation is problematic for several reasons. The main reason is that it generates considerable controversy, mostly because it is considered to infringe the right to freedom of expression. Though the author suggests that the Bill should not be rejected in its entirety, she notes the brutal psychological effect of hate crimes on their direct victims and emphasises that a legislature can succeed in combating hate crimes only if it provides for them as a separate, stand-alone category of offences. In view of these findings, the study recommended that, since the hate speech clauses have a negative impact on freedom of expression, the Bill can be promulgated subject to the legislature enacting the Prevention and Combating of Hate Crimes Bill as a stand-alone law which criminalises hate crimes.

Keywords: freedom of expression, hate crimes, hate speech, human dignity

Procedia PDF Downloads 144
449 Blister Formation Mechanisms in Hot Rolling

Authors: Rebecca Dewfall, Mark Coleman, Vladimir Basabe

Abstract:

Oxide scale growth is an inevitable byproduct of the high-temperature processing of steel. Blistering is a phenomenon that occurs due to oxide growth, where high temperatures result in the swelling of surface scale, producing a bubble-like feature. Blisters can subsequently become embedded in the steel substrate during hot rolling in the finishing mill. This rolled-in scale defect causes havoc within industry, not only through wear on machinery but also through loss of customer satisfaction, poor surface finish, loss of material, and lost profit. Even though blistering is a highly prevalent issue, there is still much that is not known or understood. The classic iron oxidation system is a complex multiphase system formed of wustite, magnetite, and hematite, producing multi-layered scales. Each phase has independent properties such as thermal coefficients, growth rate, and mechanical properties. Furthermore, each additional alloying element has a different affinity for oxygen and different mobilities in the oxide phases, so that oxide morphologies are specific to alloy chemistry. Therefore, blister regimes can be unique to each steel grade, resulting in a diverse range of formation mechanisms. Laboratory conditions were selected to simulate industrial hot rolling, with temperature ranges approximating the formation of secondary and tertiary scales in the finishing mills. Samples with composition 0.15 wt% C, 0.1 wt% Si, 0.86 wt% Mn, 0.036 wt% Al, and 0.028 wt% Cr were oxidised in a thermogravimetric analyser (TGA) with an air velocity of 10 litres min-1 at temperatures of 800°C, 850°C, 900°C, 1000°C, 1100°C, and 1200°C, respectively. Samples were held at temperature in an argon atmosphere for 10 minutes, then oxidised in air for 600 s, 60 s, 30 s, 15 s, and 4 s, respectively. Oxide morphology and blisters were characterised using EBSD, WDX, nanoindentation, FIB, and FEG-SEM imaging. Blistering was found to involve both a nucleation and a growth process. During nucleation, the scale detaches from the substrate and blisters after a very short period, roughly 10 s. The steel substrate is then exposed inside the blister and further oxidised in the reducing atmosphere of the blister; however, the atmosphere within the blister is highly dependent upon the porosity of the blister crown. The blister crown was found to be consistently between 35 and 40 µm for all heating regimes, which supports the theory that the blister inflates and the oxide then subsequently grows underneath. Upon heating, two modes of blistering were identified. In Mode 1, it was ascertained that the stresses produced by oxide growth increase with increasing oxide thickness; therefore, in Mode 1 the incubation time for blister formation is shortened by increasing temperature. In Mode 2, an increase in temperature results in oxide with high ductility and high porosity. The high oxide ductility and/or porosity accommodates the intrinsic stresses from oxide growth. Thus, Mode 2 is the inverse of Mode 1, and the incubation time increases with temperature. A new phenomenon was also reported whereby blisters formed exclusively during cooling at elevated temperatures above Mode 2.

Keywords: FEG-SEM, nucleation, oxide morphology, surface defect

Procedia PDF Downloads 117
448 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

The default accounts hold the view that there exists a kind of scalar implicature which can be processed without context and owns a psychological privilege over other scalar implicatures which depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need of relevance in discourse. However, in Katsos' study, the experimental results showed that although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation of utterances with lexical scales as much more severe than that with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Thus, there are two questionable points about this result: (1) Is it possible that the strange discrepancy is due to other factors instead of the generation of scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a replication of Katsos' Experiment 1. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be carried out under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The picture materials will be transformed into written words in DMDX, and the target sentence will be shown word by word to participants in the soundproof room in our lab. The reading times of the target parts, i.e., the words containing scalar implicatures, will be recorded. We presume that in the group with lexical scales, a standardized pragmatic mental context would help generate the scalar implicature once the scalar word occurs, which will make the participants expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be spent on the extra semantic processing. However, in the group with ad hoc scales, the scalar implicature may hardly be generated without the support of a fixed mental context for the scale. Thus, whether the new input is informative or not does not matter at all, and the reading times of the target parts will be the same in informative and under-informative utterances. The human mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? We might be able to assume, based on our experiments, that one single dominant processing paradigm may not be plausible. Furthermore, in the processing of scalar implicatures, the semantic interpretation and the pragmatic interpretation may be made in a dynamic interplay in the mind. As to the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead the possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as scale members in the mental context, and thus the lexical-semantic association of the objects may prevent the pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading take precedence and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 293
447 Using Group Concept Mapping to Identify a Pharmacy-Based Trigger Tool to Detect Adverse Drug Events

Authors: Rodchares Hanrinth, Theerapong Srisil, Peeraya Sriphong, Pawich Paktipat

Abstract:

A trigger tool is a low-cost, low-tech method to detect adverse events through clues called triggers. The Institute for Healthcare Improvement (IHI) has developed the Global Trigger Tool for measuring and preventing adverse events. However, this tool is not specific to detecting adverse drug events, so a pharmacy-based trigger tool is needed to detect adverse drug events (ADEs). Group concept mapping is an effective method for conceptualizing various ideas from diverse stakeholders. This technique was used to identify pharmacy-based triggers to detect ADEs. The aim of this study was to involve pharmacists in conceptualizing, developing, and prioritizing a feasible trigger tool to detect adverse drug events in a provincial hospital in the northeastern part of Thailand. The study was conducted during the 6-month period between April 1 and September 30, 2017. Study participants comprised 20 pharmacists (17 hospital pharmacists and 3 pharmacy lecturers) engaging in three concept mapping workshops. In these meetings, the concept mapping technique created by Trochim, a highly structured qualitative group technique for generating and sharing ideas, was used to produce and structure participants' views on which triggers had the potential to detect ADEs. During the workshops, participants (n = 20) were asked to individually rate the feasibility and potential of each trigger and to group the triggers into relevant categories to enable multidimensional scaling and hierarchical cluster analysis. The outputs of the analysis included the trigger list, cluster list, point map, point rating map, cluster map, and cluster rating map. The three workshops together resulted in 21 different triggers that were structured in a framework forming 5 clusters: drug allergy, drug-induced diseases, dosage adjustment in renal disease, potassium-related concerns, and drug overdose. The first cluster is drug allergy, for example, a doctor's order for dexamethasone injection combined with chlorpheniramine injection. The diagnosis of drug-induced hepatitis in a patient taking anti-tuberculosis drugs is a trigger in the 'drug-induced diseases' cluster. For the third cluster, a doctor's order for enalapril combined with ibuprofen in a patient with chronic kidney disease is an example of a trigger. A doctor's order for digoxin in a patient with hypokalemia is a trigger in the potassium-related cluster. Finally, a doctor's order for naloxone in a narcotic overdose was classified as a trigger in the drug overdose cluster. This study generated triggers that are similar to some in the IHI Global Trigger Tool, especially in the medication module, such as drug allergy and drug overdose. However, there are some specific aspects of this tool, including drug-induced diseases, dosage adjustment in renal disease, and potassium-related concerns, which are not contained in other trigger tools. The pharmacy-based trigger tool is suitable for pharmacists in hospitals to detect potential adverse drug events using trigger clues.
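
The analysis step can be sketched as follows: participants' sortings of triggers into piles are converted into a co-grouping similarity matrix, which is then hierarchically clustered (the point map would come from multidimensional scaling of the same matrix). The triggers and sortings below are invented placeholders, not the study's data.

```python
# Hierarchical clustering of a trigger co-grouping matrix, as used in group concept mapping.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

triggers = ["dexamethasone + chlorpheniramine order",   # drug allergy clue
            "hepatitis during anti-TB therapy",          # drug-induced disease clue
            "enalapril + ibuprofen in CKD",              # renal dose-adjustment clue
            "digoxin with hypokalemia"]                  # potassium-related clue

# Each participant's sorting: trigger index -> pile label (placeholder data).
sortings = [
    {0: "allergy", 1: "disease", 2: "renal", 3: "potassium"},
    {0: "allergy", 1: "disease", 2: "renal", 3: "renal"},
    {0: "allergy", 1: "allergy", 2: "renal", 3: "potassium"},
]

n = len(triggers)
co = np.zeros((n, n))
for s in sortings:                       # count how often two triggers share a pile
    for i in range(n):
        for j in range(n):
            co[i, j] += s[i] == s[j]

distance = 1.0 - co / len(sortings)      # dissimilarity used for clustering (and MDS)
np.fill_diagonal(distance, 0.0)
Z = linkage(squareform(distance), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))  # e.g., a 2-cluster solution of the triggers
```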

Keywords: adverse drug events, concept mapping, hospital, pharmacy-based trigger tool

Procedia PDF Downloads 130
446 Neuropsychological Aspects in Adolescent Victims of Sexual Violence with Post-Traumatic Stress Disorder

Authors: Fernanda Mary R. G. Da Silva, Adriana C. F. Mozzambani, Marcelo F. Mello

Abstract:

Introduction: Sexual assault against children and adolescents is a public health problem with serious consequences for their quality of life, especially for those who develop post-traumatic stress disorder (PTSD). The broad literature in this research area points to greater losses in verbal learning, explicit memory, speed of information processing, attention, and executive functioning in PTSD. Objective: To compare the neuropsychological functions of adolescents aged 14 to 17 years who are victims of sexual violence with PTSD with those of healthy controls. Methodology: Application of a neuropsychological battery composed of the following subtests: WASI vocabulary and matrix reasoning; digit subtests (WISC-IV); the verbal auditory learning test RAVLT; the Spatial Span subtest of the WMS-III scale; an abbreviated version of the Wisconsin test; the concentrated attention test (D2); the prospective memory subtest of the NEUPSILIN scale; the five-digit test (FDT); and the Stroop test (Trenerry version), in adolescents with a history of sexual violence in the previous six months, referred to Prove (the Violence Care and Research Program of the Federal University of São Paulo) for further treatment. Results: The results showed a deficit in the word coding process in the RAVLT test, with impairment in the A3 (p = 0.004) and A4 (p = 0.016) measures, which compromises the verbal learning process (p = 0.010) and verbal recognition memory (p = 0.012), suggesting worse performance in the acquisition of verbal information, which depends on the support of the attentional system. Worse performance was also found on list B (p = 0.047), together with a lower priming effect (p = 0.026), that is, a lower recall rate for the initial words presented, and less perseveration (p = 0.002), i.e., repeated words. Therefore, there seems to be a failure in the creation of strategies that support the mnemonic process of retaining the verbal information necessary for learning. Sustained attention was found to be impaired, with greater loss of set in the Wisconsin test (p = 0.023), a lower rate of correct responses in stage C of the Stroop test (p = 0.023) and, consequently, a higher rate of erroneous responses in stage C of the Stroop test (p = 0.023), besides more type II errors in the D2 test (p = 0.008). A higher incidence of total errors was observed in the reading stage of the FDT test (p = 0.002), which suggests fatigue in the execution of the task. Executive functioning was compromised in terms of cognitive flexibility, with a higher rate of total errors in the alternating step of the FDT test (p = 0.009) as well as a greater number of perseverative errors in the Wisconsin test (p = 0.004). Conclusion: The data from this study suggest that sexual violence and PTSD cause significant impairment in the neuropsychological functions of adolescents, evidencing a risk to quality of life at stages that are fundamental for the development of learning and cognition.

Keywords: adolescents, neuropsychological functions, PTSD, sexual violence

Procedia PDF Downloads 109
445 Management of Non-Revenue Municipal Water

Authors: Habib Muhammetoglu, I. Ethem Karadirek, Selami Kara, Ayse Muhammetoglu

Abstract:

The problem of non-revenue water (NRW) in municipal water distribution networks is common in many countries such as Turkey, where the average yearly water losses are around 50%. Water losses can be divided into two major types, namely: 1) real or physical water losses, and 2) apparent or commercial water losses. Total water losses in the city of Antalya, Turkey, are around 45%. Methods: A research study was conducted to develop appropriate methodologies to reduce NRW. A pilot study area of about 60 thousand inhabitants was chosen for the study. The pilot study area has a supervisory control and data acquisition (SCADA) system for the monitoring and control of many water quantity and quality parameters at the groundwater drinking wells, pumping stations, and distribution reservoirs, and along the water mains. The pilot study area was divided into 18 District Metered Areas (DMAs) with different numbers of service connections, ranging from a few connections to fewer than 3000 connections. The flow rate and water pressure to each DMA were continuously measured online by an accurate flow meter and a water pressure meter connected to the SCADA system. Customer water meters were installed for all billed and unbilled water users. The monthly water consumption as given by the water meters was recorded regularly. A water balance was carried out for each DMA using the well-known standard IWA approach. There were considerable variations in the water loss percentages and the components of the water losses among the DMAs of the pilot study area. Old Class B customer water meters in one DMA were replaced by more accurate new Class C water meters. Hydraulic modelling using the US-EPA EPANET model was carried out in the pilot study area for the prediction of water pressure variations in each DMA. The data sets required to calibrate and verify the hydraulic model were supplied by the SCADA system. It was noticed that a number of the DMAs exhibited high water pressure values. Therefore, pressure-reducing valves (PRVs) with constant head were installed to reduce the pressure to a suitable level determined by the hydraulic model. On the other hand, the hydraulic model revealed that the water pressure in the other DMAs could not be reduced while still complying with the minimum pressure requirement (3 bar) stated by the related standards. Results: Physical water losses were reduced considerably as a result of just reducing water pressure. Further physical water loss reduction was achieved by applying acoustic methods. The results of the water balances helped in identifying the DMAs that have considerable physical losses. Many bursts were detected, especially in the DMAs that have high physical water losses. The SCADA system was very useful for assessing the efficiency of this method and checking the quality of repairs. Regarding apparent water loss reduction, changing the customer water meters resulted in increasing water revenue by more than 20%. Conclusions: DMAs, SCADA, modelling, pressure management, leakage detection, and accurate customer water meters are efficient tools for NRW reduction.
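
The per-DMA water balance mentioned above follows the standard IWA breakdown, which can be sketched as below; the volumes are illustrative monthly figures, not data from the Antalya pilot area.

```python
# Simplified IWA water balance for one DMA (all volumes in m3 per month, illustrative).
def water_balance(system_input, billed_metered, billed_unmetered,
                  unbilled_authorized, apparent_losses):
    revenue_water = billed_metered + billed_unmetered
    non_revenue_water = system_input - revenue_water
    total_losses = system_input - (revenue_water + unbilled_authorized)
    real_losses = total_losses - apparent_losses   # physical losses: leaks and bursts
    return {
        "NRW (m3)": non_revenue_water,
        "NRW (%)": round(100.0 * non_revenue_water / system_input, 1),
        "apparent losses (m3)": apparent_losses,
        "real losses (m3)": real_losses,
    }

# Example DMA: 100,000 m3 supplied; 52,000 m3 billed via meters; 3,000 m3 billed unmetered;
# 1,000 m3 unbilled authorized use (e.g., mains flushing); 9,000 m3 estimated apparent
# losses (meter under-registration, unauthorized use).
print(water_balance(100_000, 52_000, 3_000, 1_000, 9_000))
```

Comparing such balances across the 18 DMAs is what allows the DMAs with the largest real losses to be targeted first for pressure management and acoustic leak detection.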

Keywords: NRW, water losses, pressure management, SCADA, apparent water losses, urban water distribution networks

Procedia PDF Downloads 371
444 Highly Selective Phosgene Free Synthesis of Methylphenylcarbamate from Aniline and Dimethyl Carbonate over Heterogeneous Catalyst

Authors: Nayana T. Nivangune, Vivek V. Ranade, Ashutosh A. Kelkar

Abstract:

Organic carbamates are versatile compounds widely employed as pesticides, fungicides, herbicides, dyes, pharmaceuticals, and cosmetics, and in the synthesis of polyurethanes. Carbamates can be easily transformed into isocyanates by thermal cracking. Isocyanates are used as precursors for manufacturing agrochemicals, adhesives, and polyurethane elastomers. The manufacture of polyurethane foams is a major application of aromatic isocyanates; in 2007 the global consumption of polyurethane was about 12 million metric tons/year, and the average annual growth rate was about 5%. Presently, isocyanates/carbamates are manufactured by a phosgene-based process. However, because of the high toxicity of phosgene and the formation of waste products in large quantities, there is a need to develop an alternative and safer process for the synthesis of isocyanates/carbamates. Recently, many alternative processes have been investigated, and carbamate synthesis by methoxycarbonylation of aromatic amines using dimethyl carbonate (DMC) as a green reagent has emerged as a promising alternative route. In this reaction, methanol is formed as a by-product, which can be converted to DMC either by oxidative carbonylation of methanol or by reaction with urea. Thus, the route based on DMC has the potential to provide an atom-efficient and safer route for the synthesis of carbamates from DMC and amines. Much work is being carried out on the development of catalysts for this reaction, and homogeneous zinc salts were found to be good catalysts. However, catalyst/product separation is challenging with these catalysts. There are a few reports on the use of supported Zn catalysts; however, deactivation of the catalyst is the major problem with them. We wish to report here the methoxycarbonylation of aniline to methylphenylcarbamate (MPC) using amino acid complexes of Zn as highly active and selective catalysts. The catalysts were characterized by XRD, IR, solid-state NMR, and XPS analysis. Methoxycarbonylation of aniline was carried out at 170 °C using 2.5 wt% of the catalyst to achieve >98% conversion of aniline with 97-99% selectivity to MPC as the product. Formation of N-methylated products in small quantities (1-2%) was also observed. Optimization of the reaction conditions was carried out using the zinc-proline complex as the catalyst. Selectivity was strongly dependent on the temperature and the aniline:DMC ratio used. At lower aniline:DMC ratios and at higher temperatures, selectivity to MPC decreased (to 85-89%, respectively), with the formation of N-methylaniline (NMA), N-methyl methylphenylcarbamate (MMPC), and N,N-dimethylaniline (NNDMA) as by-products. The best results (98% aniline conversion with 99% selectivity to MPC in 4 h) were observed at 170 °C and an aniline:DMC ratio of 1:20. Catalyst stability was verified by carrying out a recycle experiment. Methoxycarbonylation proceeded smoothly with various amine derivatives, indicating the versatility of the catalyst. The catalyst is inexpensive and can be easily prepared from a zinc salt and naturally occurring amino acids. The results are important and provide an environmentally benign route for MPC synthesis with high activity and selectivity.
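As a quick reading aid, the reported conversion and selectivity combine into an overall yield as in the small sketch below (the values are taken from the best case stated above; the calculation itself is generic).

```python
# Illustration of how conversion and selectivity combine into an overall
# yield for the best reported conditions (98% conversion, 99% selectivity).
aniline_conversion = 0.98   # fraction of aniline reacted
mpc_selectivity = 0.99      # fraction of reacted aniline that ends up as MPC
mpc_yield = aniline_conversion * mpc_selectivity
print(f"Overall MPC yield = {mpc_yield:.1%}")  # about 97%
```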

Keywords: aniline, heterogeneous catalyst, methoxycarbonylation, methylphenyl carbamate

Procedia PDF Downloads 254
443 Efficient Utilization of Negative Half Wave of Regulator Rectifier Output to Drive Class D LED Headlamp

Authors: Lalit Ahuja, Nancy Das, Yashas Shetty

Abstract:

LED lighting has been increasingly adopted for vehicles in both domestic and foreign automotive markets. This miniaturized technology gives the best light output with low energy consumption, and cost-efficient solutions for driving it are the need of the hour. In this paper, we present a methodology for driving the highest-class two-wheeler headlamp from the regulator and rectifier (RR) output. Unlike usual LED headlamps, which are driven by a battery, a low-cost and highly efficient LED Driver Module (LDM) driven directly by the RR is proposed. The positive half of the magneto output is regulated and used to charge the battery that supplies various peripherals, while conventionally the negative half was used for operating bulb-based exterior lamps. With the advancement of battery-driven LED headlamps, this negative half pulse has remained unused in most vehicles. Our system uses the negative half-wave rectified DC output from the RR to provide constant light output at all RPMs of the vehicle. With the negative rectified DC output of the RR, we have the advantage of a pulsating DC input that periodically goes to zero, helping us generate a constant DC output equivalent to the required LED load; with a change in RPM, an additional active thermal bypass circuit helps us maintain the efficiency and limit the thermal rise. The methodology uses the negative half-wave output of the RR along with a linear constant-current driver with significantly higher efficiency. Although the RR output has varying frequency and duty cycle at different engine RPMs, the driver is designed to provide constant current to the LEDs with minimal ripple. In LED headlamps, a DC-DC switching regulator is usually used, which tends to be bulky; with linear regulators, we eliminate bulky components and improve the form factor. Hence, this solution is both cost-efficient and compact. Presently, output ripple-free amplitude drivers with fewer components and less complexity are limited to lower-power LED lamps, and the focus of current high-efficiency research is often on high-power LED applications. This paper presents a method of driving the LED load at both high beam and low beam using the negative half-wave rectified pulsating DC from the RR with a minimum number of components, maintaining high efficiency within the thermal limitations. Linear regulators are significantly inefficient, with efficiencies typically about 40% and reaching as low as 14%, which leads to poor thermal performance. Although they do not require complex and bulky circuitry, powering high-power devices with them is difficult to realise. But with the input being negative half-wave rectified pulsating DC, this efficiency can be improved, as it helps us generate a constant DC output equivalent to the LED load, minimising the voltage drop across the linear regulator, as illustrated below. Hence, losses are significantly reduced, and efficiency as high as 75% is achieved. With a change in RPM, the DC voltage increases, which can be managed by the active thermal bypass circuitry, resulting in better thermal performance; hence, the use of bulky and expensive heat sinks can be avoided. The methodology therefore utilizes the unused negative pulsating DC output of the RR to optimize the utilization of RR output power and provides a cost-efficient solution compared to costly DC-DC drivers.
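A minimal sketch of the efficiency argument follows, with purely hypothetical component values (LED string voltage, peak input voltage): the average headroom across a linear constant-current driver is smaller for a half-wave pulsating input than for a flat rail at the same peak, so less power is dissipated in the pass element.

```python
import numpy as np

# Sketch of why a pulsating input helps a linear constant-current driver.
# Efficiency while conducting is roughly V_LED / V_in(t); the headroom
# (V_in - V_LED) is dissipated as heat. All numbers are hypothetical.

V_LED = 9.0    # assumed forward voltage of the LED string (V)

def linear_driver_efficiency(v_in, v_led=V_LED):
    """Average efficiency over the intervals where the driver conducts."""
    conducting = v_in >= v_led
    return v_led / np.mean(v_in[conducting]) if conducting.any() else 0.0

t = np.linspace(0.0, 1.0, 10_000)                 # one waveform period
half_wave = 14.0 * np.abs(np.sin(np.pi * t))      # rectified half-wave, 14 V peak
flat_rail = np.full_like(t, 14.0)                 # equivalent flat DC rail

print(f"Flat 14 V rail     : {linear_driver_efficiency(flat_rail):.0%}")
print(f"Pulsating half-wave: {linear_driver_efficiency(half_wave):.0%}")
```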

Keywords: class D LED headlamp, regulator and rectifier, pulsating DC, low cost and highly efficient, LED driver module

Procedia PDF Downloads 41
442 The Impact of Using Flattening Filter-Free Energies on Treatment Efficiency for Prostate SBRT

Authors: T. Al-Alawi, N. Shorbaji, E. Rashaidi, M. Alidrisi

Abstract:

Purpose/Objective(s): The main purpose of this study is to analyze the planning of SBRT treatments for localized prostate cancer with 6FFF and 10FFF energies, to see whether there is a dosimetric difference between the two energies and how we can increase the plan efficiency and reduce its complexity. A further aim is to introduce a planning method in our department to treat prostate cancer by utilizing high-energy photons without increasing patient toxicity and while fulfilling all dosimetric constraints for the organs at risk (OAR), and then to evaluate the target 95% coverage (PTV95), V5%, V2%, V1%, the low-dose volumes for the OAR (V1Gy, V2Gy, V5Gy), the monitor units (beam-on time), and the homogeneity index (HI), conformity index (CI), and gradient index (GI) for each treatment plan. Materials/Methods: Two treatment plans were generated retrospectively for 15 patients with localized prostate cancer using the CT planning images acquired for radiotherapy purposes. Each plan contains two/three complete arcs with two/three different collimator angle sets. The maximum dose rate available is 1400 MU/min for the 6FFF energy and 2400 MU/min for 10FFF; therefore, when we need to avoid changing the gantry speed during the rotation, we tend to use a third arc in the 6FFF plan to accommodate the high dose per fraction. The clinical target volume (CTV) consists of the entire prostate for organ-confined disease. The planning target volume (PTV) involves a margin of 5 mm; a 3-mm margin is favored posteriorly. Organs at risk identified and contoured include the rectum, bladder, penile bulb, femoral heads, and small bowel. The prescription is to deliver 35 Gy in five fractions to the PTV and to apply constraints for the organs at risk derived from those reported in the references. Results: In terms of CI = 0.99, HI = 0.7, and GI = 4.1, the two energies 6FFF and 10FFF were the same, with no differences, but the total delivered MUs are much lower for the 10FFF plans (2907 for 6FFF vs. 2468 for 10FFF), and the total delivery time is 124 s for 6FFF vs. 61 s for 10FFF beams. There were no dosimetric differences between 6FFF and 10FFF in terms of PTV coverage and mean doses; the mean doses for the bladder, rectum, femoral heads, penile bulb, and small bowel were collected, and they were in favor of 10FFF. We also obtained lower V1Gy, V2Gy, and V5Gy doses for all OAR with the 10FFF plans. Integral doses (ID) in Gy·L were recorded for all OAR, and they were lower with the 10FFF plans. Conclusion: The high-energy 10FFF beam has a lower treatment time and lower delivered MUs; 10FFF also showed lower integral and mean doses to the organs at risk. In this study, we suggest using a 10FFF beam for SBRT prostate treatment, which has the advantage of lowering the treatment time and thereby reducing plan complexity with respect to 6FFF beams.
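For readers unfamiliar with the plan-quality indices quoted above, the sketch below uses commonly cited definitions (RTOG-style CI, ICRU-83-style HI, Paddick GI); these definitions and the input values are assumptions for illustration and are not necessarily those used in the study.

```python
# Sketch of commonly used plan-quality indices for SBRT plans.
# Definitions vary between protocols; the ones below are frequent choices
# and are assumptions, not necessarily those applied in this study.

def conformity_index(v_prescription_isodose, v_ptv):
    """RTOG-style CI: volume covered by the prescription isodose / PTV volume."""
    return v_prescription_isodose / v_ptv

def homogeneity_index(d2, d98, d50):
    """ICRU-83-style HI: (D2% - D98%) / D50%; 0 means perfectly homogeneous."""
    return (d2 - d98) / d50

def gradient_index(v_half_prescription, v_prescription_isodose):
    """Paddick GI: 50% isodose volume / prescription isodose volume."""
    return v_half_prescription / v_prescription_isodose

# Hypothetical volumes (cm3) and doses (Gy), for illustration only
print(conformity_index(v_prescription_isodose=81.0, v_ptv=82.0))
print(homogeneity_index(d2=36.8, d98=33.6, d50=35.4))
print(gradient_index(v_half_prescription=332.0, v_prescription_isodose=81.0))
```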

Keywords: FFF beam, SBRT prostate, VMAT, prostate cancer

Procedia PDF Downloads 58
441 Recrystallization Behavior and Microstructural Evolution of Nickel Base Superalloy AD730 Billet during Hot Forging at Subsolvus Temperatures

Authors: Marcos Perez, Christian Dumont, Olivier Nodin, Sebastien Nouveau

Abstract:

Nickel superalloys are used to manufacture high-temperature rotary engine parts such as high-pressure disks in gas turbine engines. High strength at high operating temperatures is required due to the levels of stress and heat the disk must withstand. Therefore, parts must be made from materials that can maintain mechanical strength at high temperatures whilst remaining comparatively low in cost. A manufacturing process referred to as the triple-melt process has made the production of cast and wrought (C&W) nickel superalloys possible, which means that the balance of cost and performance at high temperature may be optimized. AD730TM is a newly developed Ni-based superalloy for turbine disk applications, with reported service properties around 700°C superior to those of Inconel 718 and several other alloys. The cast ingot is converted into billet during either a cogging process or open-die forging. The semi-finished billet is then further processed into its final geometry by forging, heat treating, and machining. Conventional ingot-to-billet conversion is an expensive and complex operation, requiring a significant number of steps to break up the coarse as-cast structure and interdendritic regions. Due to the size of conventional ingots, it is difficult to achieve a uniformly high level of strain for recrystallization, resulting in non-recrystallized regions that retain large unrecrystallized grains. Non-uniform grain distributions will also affect the ultrasonic inspectability response, which is used to find defects in the final component. The main aim is to analyze the recrystallization behavior and microstructural evolution of AD730 at subsolvus temperatures from a semi-finished product (billet), under conditions representative of both cogging and hot forging operations. Special attention was paid to the presence of large unrecrystallized grains. Double truncated cones (DTCs) were hot forged at subsolvus temperatures in a hydraulic press, followed by air cooling. SEM and EBSD analyses were conducted in the as-received (billet) and as-forged conditions. The AD730 billet alloy presents a complex microstructure characterized by a mixture of several constituents. Large unrecrystallized grains present a substructure characterized by large misorientation gradients with the formation of medium- to high-angle boundaries in their interior, especially close to the grain boundaries, denoting inhomogeneous strain distribution. A fine distribution of intragranular precipitates was found in their interior, playing a key role in strain distribution and the subsequent recrystallization behaviour during hot forging. The continuous dynamic recrystallization (CDRX) mechanism was found to be operating in the large unrecrystallized grains, promoting the formation of intragranular DRX grains and the gradual recrystallization of these grains. Evidence that a hetero-epitaxial recrystallization mechanism operates in the AD730 billet material was found: coherent γ-shells around primary γ' precipitates were observed. However, no significant contribution of this mechanism to the overall recrystallization during hot forging was found. By contrast, strain has the strongest effect on the microstructural evolution of AD730, increasing the recrystallized fraction and refining the structure. Regions with a low level of deformation (ε ≤ 0.6) translated into large fractions of unrecrystallized structures (strain accumulation). The presence of undissolved secondary γ' precipitates (pinning effect) prior to hot forging operations could explain these results.

Keywords: AD730 alloy, continuous dynamic recrystallization, hot forging, γ’ precipitates

Procedia PDF Downloads 176
440 Health Reforms in Central and Eastern European Countries: Results, Dynamics, and Outcomes Measure

Authors: Piotr Romaniuk, Krzysztof Kaczmarek, Adam Szromek

Abstract:

Background: A number of approaches to assessing the performance of health systems have been proposed so far. Nonetheless, they lack consensus regarding the key components of the assessment procedure and the criteria of evaluation. The WHO and OECD have developed methods of assessing health systems to counteract the underlying issues, but these are not free of controversy and have not produced a commonly accepted consensus. Aim of the study: On the basis of the WHO and OECD approaches, we decided to develop our own methodology to assess the performance of health systems in Central and Eastern European countries. We applied the method to compare the effects of health system reforms in 20 countries of the region, in order to evaluate the dynamics of changes in terms of health system outcomes. Methods: Data were collected for a 25-year period after the fall of communism, divided into different post-reform stages. Datasets collected from individual countries underwent one-, two- or multi-dimensional statistical analyses, and a Synthetic Measure of health system Outcomes (SMO) was calculated on the basis of the method of zeroed unitarization, as sketched below. A map of the dynamics of changes over time across the region was constructed. Results: When making a comparative analysis of the tested group in terms of the average SMO value throughout the analyzed period, we noticed some differences, although the gaps between individual countries were small. The countries with the highest SMO were the Czech Republic, Estonia, Poland, Hungary, and Slovenia, while the lowest values were found in Ukraine, Russia, Moldova, Georgia, Albania, and Armenia. Countries differ in the range of SMO value changes throughout the analyzed period. The dynamics of change are high in the case of Estonia and Latvia, moderate in the case of Poland, Hungary, the Czech Republic, Croatia, Russia, and Moldova, and small for Belarus, Ukraine, Macedonia, Lithuania, and Georgia. This information reveals the fluctuation dynamics of the measured value over time, yet it does not necessarily mean that such a dynamic range reflects an improvement in a given country. In reality, some of the countries moved along the scale with different effects: Albania decreased its level of health system outcomes, while Armenia and Georgia made progress but lost ground to the leaders in the region. On the other hand, Latvia and Estonia showed the most dynamic progress in improving outcomes. Conclusions: Countries that decided to implement comprehensive health reform achieved a positive result in terms of further improvements in health system efficiency levels. Besides, a higher level of efficiency during the initial transition period generally positively determined the subsequent value of the efficiency index, but not the dynamics of change. The paths of health system outcome improvement are highly diverse between countries. The instrument we propose constitutes a useful tool to evaluate the effectiveness of reform processes in post-communist countries, but more studies are needed to identify factors that may determine the results obtained by individual countries, as well as to eliminate the limitations of the methodology we applied.
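The sketch below illustrates the mechanics of the zeroed unitarization step behind a synthetic measure of this kind; the indicator set, directionality handling, and weighting actually used in the study are not reproduced, and the values shown are hypothetical.

```python
import numpy as np

# Sketch of the zeroed unitarization step behind a synthetic measure:
# each indicator is rescaled to [0, 1] across countries and the rescaled
# values are averaged. Indicator list and weights are illustrative only.

def zeroed_unitarization(x, higher_is_better=True):
    x = np.asarray(x, dtype=float)
    z = (x - x.min()) / (x.max() - x.min())
    return z if higher_is_better else 1.0 - z

def synthetic_measure(indicators):
    """indicators: list of (values_across_countries, higher_is_better)."""
    scaled = [zeroed_unitarization(v, good) for v, good in indicators]
    return np.mean(scaled, axis=0)   # unweighted mean across indicators

# Hypothetical indicators for four countries
life_expectancy = ([71.0, 75.5, 77.8, 68.9], True)
infant_mortality = ([12.0, 4.1, 2.9, 15.3], False)   # lower is better
print(synthetic_measure([life_expectancy, infant_mortality]))
```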

Keywords: health system outcomes, health reforms, health system assessment, health system evaluation

Procedia PDF Downloads 266
439 Leadership Education for Law Enforcement Mid-Level Managers: The Mediating Role of Effectiveness of Training on Transformational and Authentic Leadership Traits

Authors: Kevin Baxter, Ron Grove, James Pitney, John Harrison, Ozlem Gumus

Abstract:

The purpose of this research is to determine the mediating effect of the effectiveness of the training provided by Northwestern University's School of Police Staff and Command (SPSC) on the ability of law enforcement mid-level managers to learn transformational and authentic leadership traits. This study will also evaluate the leadership styles of course graduates compared to non-attendees using a static group comparison design. The Louisiana State Police pay approximately $40,000 in salary, tuition, housing, and meals for each state police lieutenant attending the 10-week SPSC program. This school lists the development of transformational leaders as an element of increasing emphasis. Additionally, the SPSC curriculum addresses all four components of authentic leadership: self-awareness, transparency, ethical/moral conduct, and balanced processing. Upon return to law enforcement in roles of mid-level management, there are questions as to whether or not students revert to an "autocratic" leadership style. Insufficient evidence exists to support claims for the effectiveness of management training or leadership development. Though it is widely recognized that transformational styles are beneficial to law enforcement, there is little evidence to suggest that police leadership styles are changing; police organizations continue to hold to a more transactional style (i.e., most senior police leaders remain autocrats). Additionally, research on the application of transformational, transactional, and laissez-faire leadership in police organizations is minimal. The population of the study is law enforcement mid-level managers from various states within the United States who completed leadership training presented by the SPSC. The sample will be composed of 66 active law enforcement mid-level managers (lieutenants and captains) who have graduated from SPSC and 65 active law enforcement mid-level managers (lieutenants and captains) who have not attended SPSC. Participants will answer demographic questions, the Multifactor Leadership Questionnaire, the Authentic Leadership Questionnaire, and the Kirkpatrick Hybrid Evaluation Survey. Descriptive statistics, group comparison, one-way MANCOVA, and the Kirkpatrick Evaluation Model survey will be used to determine training effectiveness at the four levels of reaction, learning, behavior, and results. The independent variables are SPSC graduates (two groups: upper and lower) and non-SPSC attendees, and the dependent variables are transformational and authentic leadership scores. SPSC graduates are expected to have higher MLQ scores for transformational leadership traits and higher ALQ scores for authentic leadership traits than SPSC non-attendees. We also expect the graduates to rate the efficacy of SPSC leadership training as high. This study will validate (or invalidate) the benefits, costs, and resources required for leadership development from a nationally recognized police leadership program, and it will also help fill the gap in the literature that exists between law enforcement professional development and transformational and authentic leadership styles.

Keywords: training effectiveness, transformational leadership, authentic leadership, law enforcement mid-level manager

Procedia PDF Downloads 84
438 Engineering Photodynamic with Radioactive Therapeutic Systems for Sustainable Molecular Polarity: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces Luhmann’s autopoietic social systems, starting with the original concept of autopoiesis put forward by biologists and scientists, including the modification of general systems based on socialized medicine. A specific type of autopoietic system is explained in the three existing groups of ecological phenomena: interaction, social, and medical sciences. This hypothesis model, nevertheless, has a nonlinear interaction with its natural environment, an 'interactional cycle' for the exchange of photon energy with molecules without any changes in topology. The external forces in the system's environment might be concomitant with the influence of natural fluctuations (e.g., radioactive radiation, electromagnetic waves). The cantilever sensor provides insights for a future chip processor for the prevention of social metabolic systems. Thus, circuits with resonant electric and optical properties are prototyped on board as an intra-chip/inter-chip transmission, producing approximately 1.7 mA at 3.3 V to service detection in locomotion with the least significant power losses. Nowadays, therapeutic systems assimilate materials from embryonic stem cells to aggregate multiple functions of the vessels' natural de-cellular structure for replenishment. Meanwhile, the interior actuators deploy base-pair complementarity of nucleotides for the symmetric arrangement, in particular bacterial nanonetworks of the sequence cycle, creating double-stranded DNA strings. The DNA strands must be sequenced, assembled, and decoded in order to reconstruct the original source reliably. The exterior actuators are designed with the ability to sense different variations in the corresponding patterns regarding beat-to-beat heart rate variability (HRV) for spatial autocorrelation of molecular communication, which consists of human electromagnetic, piezoelectric, electrostatic, and electrothermal energy to monitor and transfer the dynamic changes of all the cantilevers simultaneously in a real-time workspace with high precision. A prototype-enabled dynamic energy sensor has been investigated in the laboratory for the inclusion of nanoscale devices in the architecture, with a fuzzy logic control for the detection of thermal and electrostatic changes and with optoelectronic devices to interpret the uncertainty associated with signal interference. Ultimately, the controversial aspect of molecular frictional properties is adjusted so that the molecules form their unique spatial structure modules, providing the environment's mutual contribution to the investigation of mass temperature changes due to the pathogenic archival architecture of clusters.

Keywords: autopoiesis, nanoparticles, quantum photonics, portable energy, photonic structure, photodynamic therapeutic system

Procedia PDF Downloads 98
437 Impact of Traffic Restrictions due to Covid19, on Emissions from Freight Transport in Mexico City

Authors: Oscar Nieto-Garzón, Angélica Lozano

Abstract:

In urban areas, on-road freight transportation creates several social and environmental externalities. It is therefore crucial that freight transport considers not only economic aspects, like retailer distribution cost reduction and service improvement, but also environmental effects such as global CO2 and local emissions (e.g., particulate matter, NOX, CO) and noise. Inadequate infrastructure development, a high rate of urbanization, increasing motorization, and a lack of transportation planning are characteristics shared by urban areas in developing countries. The Metropolitan Area of Mexico City (MAMC), the Metropolitan Area of São Paulo (MASP), and Bogota are three of the largest urban areas in Latin America where air pollution is often a problem associated with emissions from mobile sources. The effect of the lockdown due to COVID-19 was analyzed for these urban areas, comparing the same period (January to August) of the years 2016-2019 with 2020. A strong reduction in the concentration of primary criteria pollutants emitted by road traffic was observed at the beginning of 2020 and after the lockdown measures. The daily mean concentration of NOX decreased by 40% in the MAMC, 34% in the MASP, and 62% in Bogota. Daily mean ozone levels increased after the lockdown measures in the three urban areas: 25% in the MAMC, 30% in the MASP, and 60% in Bogota. These changes in emission patterns from mobile sources drastically changed the ambient atmospheric concentrations of CO and NOX. The CO/NOX ratio in the morning hours is often used as an indicator of mobile source emissions. In 2020, traffic from cars and light vehicles was significantly reduced due to the first lockdown, but buses and trucks had no restrictions. In theory, this implies a decrease in CO and NOX from cars or light vehicles, while NOX levels from trucks are maintained (or lowered due to the congestion reduction). At rush hours, traffic was reduced by between 50% and 75%, so trucks could reach higher speeds, which would reduce their emissions. By means of an emission model, it was found that an increase in the average speed (75%) would reduce the emissions (CO, NOX, and PM) from diesel trucks by up to 30%. It was expected that the value of the CO/NOX ratio would change due to the lockdown restrictions. However, although there was a significant reduction in traffic, CO/NOX kept its trend, decreasing to 8-9 in 2020. Hence, traffic restrictions had no impact on the CO/NOX ratio, although they did reduce vehicle emissions of CO and NOX. Therefore, these emissions may not adequately represent the change in vehicle emission patterns, or this ratio may not be a good indicator of emissions generated by vehicles. From the comparison of the theoretical data and those observed during the lockdown, it results that the real NOX reduction was lower than the theoretical reduction. The reasons could be that there are other sources of NOX emissions, so there would be an over-representation of NOX emissions attributed to diesel vehicles, or there is an underestimation of CO emissions. Further analysis needs to consider this ratio to evaluate the emission inventories and then to extend these results to the determination of emission control policies for non-mobile sources.
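A small sketch of the bookkeeping behind the morning-hour CO/NOX ratio follows; the concentrations are hypothetical and only illustrate how the ratio and the percentage reductions discussed above are computed.

```python
# Sketch: morning-hour CO/NOx ratio and lockdown change, computed from
# monitored concentrations. All concentration values are hypothetical and
# only illustrate the bookkeeping used in this kind of comparison.

def mean_ratio(co_ppm, nox_ppm):
    """Mean CO/NOx ratio over paired morning-hour observations."""
    return sum(c / n for c, n in zip(co_ppm, nox_ppm)) / len(co_ppm)

def percent_change(before, after):
    return 100.0 * (after - before) / before

co_2019, nox_2019 = [1.8, 2.1, 1.9], [0.180, 0.200, 0.190]   # ppm, hypothetical
co_2020, nox_2020 = [1.0, 1.1, 0.9], [0.115, 0.125, 0.105]   # ppm, hypothetical

print(f"CO/NOx 2019 = {mean_ratio(co_2019, nox_2019):.1f}")
print(f"CO/NOx 2020 = {mean_ratio(co_2020, nox_2020):.1f}")
print(f"NOx change  = {percent_change(sum(nox_2019), sum(nox_2020)):.0f}%")
```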

Keywords: COVID-19, emissions, freight transport, Latin American metropolises

Procedia PDF Downloads 113
436 Considering Aerosol Processes in Nuclear Transport Package Containment Safety Cases

Authors: Andrew Cummings, Rhianne Boag, Sarah Bryson, Gordon Turner

Abstract:

Packages designed for the transport of radioactive material must satisfy rigorous safety regulations specified by the International Atomic Energy Agency (IAEA). Higher Activity Waste (HAW) transport packages have to maintain containment of their contents during normal and accident conditions of transport (NCT and ACT). To ensure the containment criteria are satisfied, these packages are required to be leak-tight in all transport conditions to meet allowable activity release rates. Package design safety reports are the safety cases that provide the claims, evidence, and arguments to demonstrate that packages meet the regulations; once approved by the competent authority (in the UK this is the Office for Nuclear Regulation), a licence to transport radioactive material is issued for the package(s). The standard approach to demonstrating containment in the RWM transport safety case is set out in BS EN ISO 12807. In this document, a method for measuring a leak rate from the package is explained by way of a small interspace test volume situated between two O-ring seals on the underside of the package lid. The interspace volume is pressurised and a pressure drop measured; a small interspace test volume makes the method more sensitive, enabling the measurement of smaller leak rates. By ascertaining the activity of the contents, identifying a releasable fraction of material, and treating that fraction of material as a gas, allowable leak rates for NCT and ACT are calculated. This approach adheres to the basic safety principles of ISO 12807, is very pessimistic, and is current practice in the demonstration of transport safety, accepted by the UK regulator. It is UK government policy that management of HAW will be through geological disposal. It is proposed that intermediate-level waste be transported to the geological disposal facility (GDF) in large cuboid packages. This poses a challenge for containment demonstration because such packages will have long seals and therefore large interspace test volumes. There is also uncertainty in the releasable fraction of material within the package ullage space, because the waste may be in many different forms, which makes it difficult to define the fraction of material released by the waste package. Additionally, because of the large interspace test volume, measuring the calculated leak rates may not be achievable. For this reason, a justification for a lower releasable fraction of material is sought. This paper considers the use of aerosol processes to reduce the releasable fraction for both NCT and ACT. It reviews the basic coagulation and removal processes and applies the dynamic aerosol balance equation, as sketched below. The proposed solution includes only the most well understood physical processes, namely Brownian coagulation and gravitational settling. Other processes have been eliminated either on the basis that they would serve to reduce the release to the environment further (pessimistically, in keeping with the essence of nuclear transport safety cases) or that they are not credible in the conditions of transport considered.
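A minimal sketch of the dynamic aerosol balance reduced to these two processes is given below, for a well-mixed, monodisperse aerosol; the parameter values are hypothetical and are not taken from any package safety case.

```python
import numpy as np

# Minimal sketch of the dynamic aerosol balance reduced to the two processes
# named above, Brownian coagulation and gravitational settling, for a
# well-mixed, monodisperse aerosol in a cavity of settling height H:
#     dn/dt = -K * n**2 - (v_s / H) * n
# This is a simplification for illustration; all parameter values are
# hypothetical and not taken from any package safety case.

k_B = 1.380649e-23      # Boltzmann constant (J/K)
T = 293.0               # gas temperature (K)
mu = 1.8e-5             # gas dynamic viscosity (Pa.s)
rho_p = 2500.0          # particle density (kg/m3)
d_p = 1.0e-6            # particle diameter (m)
H = 1.0                 # settling height of the cavity (m)
g = 9.81

K = 4.0 * k_B * T / (3.0 * mu)              # monodisperse coagulation coeff. (continuum regime)
v_s = rho_p * g * d_p**2 / (18.0 * mu)      # Stokes settling velocity

n = 1.0e12              # initial number concentration (particles / m3)
dt, t_end = 1.0, 3600.0
for _ in np.arange(0.0, t_end, dt):         # explicit Euler integration
    n += dt * (-K * n**2 - (v_s / H) * n)

print(f"Airborne concentration after 1 h: {n:.3e} m^-3")
```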

Keywords: aerosol processes, Brownian coagulation, gravitational settling, transport regulations

Procedia PDF Downloads 92
435 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times; these repeated voxels incur costly memory operations while carrying no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles, which minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate voxelizations free of 26-connectivity tunnels. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line-based voxelization methods. It also aids in understanding how the Gap Detection technique improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods; the Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
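A much-simplified CPU sketch of the scan-line idea follows; it is written in Python rather than the paper's GLSL compute shader, uses a sub-voxel sampling distance in place of the Gap Detection step, and records each voxel once via a set.

```python
import numpy as np

# CPU sketch (not the paper's GLSL compute shader) of surface voxelization of
# one triangle with parallel, equidistant scan-lines. Each scan-line runs from
# a point on edge AB to the matching point on edge AC (so all lines are
# parallel to BC); stepping lines and samples at sub-voxel spacing stands in
# for the Gap Detection step, which is not reproduced here.

def voxelize_triangle(a, b, c, voxel_size):
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    step = 0.5 * voxel_size                          # sub-voxel sampling distance
    n_lines = int(np.ceil(max(np.linalg.norm(b - a),
                              np.linalg.norm(c - a)) / step)) + 1
    voxels = set()                                   # each voxel recorded once
    for t in np.linspace(0.0, 1.0, n_lines):
        start, end = a + t * (b - a), a + t * (c - a)
        length = np.linalg.norm(end - start)
        n_samples = int(np.ceil(length / step)) + 1
        for s in np.linspace(0.0, 1.0, n_samples):
            p = start + s * (end - start)
            voxels.add(tuple(np.floor(p / voxel_size).astype(int)))
    return voxels

tri = ([0.0, 0.0, 0.0], [4.0, 0.5, 0.0], [0.5, 3.5, 2.0])
print(len(voxelize_triangle(*tri, voxel_size=0.25)), "voxels")
```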

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 43
434 Thermosensitive Hydrogel Development for Its Possible Application in Cardiac Cell Therapy

Authors: Lina Paola Orozco Marin, Yuliet Montoya Osorio, John Bustamante Osorno

Abstract:

Ischemic events can culminate in acute myocardial infarction, with irreversible cardiac lesions that cannot be restored due to the limited regenerative capacity of the heart. Cell therapy seeks to replace these injured or necrotic cells by transplanting healthy and functional cells. The therapeutic alternatives proposed by tissue engineering and cardiovascular regenerative medicine include the use of biomaterials to mimic the native extracellular medium, which is full of proteins, proteoglycans, and glycoproteins. The selected biomaterials must provide structural support to the encapsulated cells to prevent their migration and death in the host tissue. In this context, the present research work focused on developing a natural thermosensitive hydrogel, its physical and chemical characterization, and the determination of its biocompatibility in vitro. The hydrogel was developed by mixing hydrolyzed bovine or porcine collagen at 2% w/v, chitosan at 2.5% w/v, and beta-glycerolphosphate at 8.5% w/w or 10.5% w/w under magnetic stirring at 4°C. Once obtained, the thermosensitivity and gelation time were determined by incubating the samples at 37°C and evaluating them through the inverted tube method. The morphological characterization of the hydrogels was carried out through scanning electron microscopy, and the chemical characterization by means of infrared spectroscopy. The biocompatibility was determined using the MTT cytotoxicity test according to the ISO 10993-5 standard for the hydrogel's precursors, using the fetal human ventricular cardiomyocyte cell line RL-14. The RL-14 cells were also seeded on top of the hydrogels, and the supernatants were subcultured at different periods for observation under a bright-field microscope. Four types of thermosensitive hydrogels were obtained, differing in composition and concentration, called A1 (chitosan/bovine collagen/beta-glycerolphosphate 8.5% w/w), A2 (chitosan/porcine collagen/beta-glycerolphosphate 8.5%), B1 (chitosan/bovine collagen/beta-glycerolphosphate 10.5%), and B2 (chitosan/porcine collagen/beta-glycerolphosphate 10.5%). A1 and A2 had a gelation time of 40 minutes, and B1 and B2 had a gelation time of 30 minutes at 37°C. Electron micrographs revealed a three-dimensional internal structure with interconnected pores for the four types of hydrogels, which facilitates the exchange of nutrients and oxygen and the exit of metabolites, preserving a microenvironment suitable for cell proliferation. In the infrared spectra, it was possible to observe the interaction that occurs between the amides of the polymeric compounds and the phosphate groups of beta-glycerolphosphate. Finally, the biocompatibility tests indicated that cells in contact with the hydrogel, or with each of its precursors, are not affected in their proliferation capacity over a period of 16 days. These results show the potential of the hydrogel to increase the cell survival rate in the cardiac cell therapies under investigation. Moreover, they lay the foundations for its characterization and biological evaluation in both in vitro and in vivo models.
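For context on the cytotoxicity criterion, the sketch below shows how MTT absorbance readings are commonly reduced to a relative viability figure, with the 70%-of-control threshold usually applied in ISO 10993-5 style assessments; the optical densities are hypothetical and are not data from the study.

```python
# Sketch of how MTT absorbance readings are commonly reduced to a relative
# viability figure for an ISO 10993-5 style assessment. The optical densities
# below are hypothetical; blank subtraction and the 70%-of-control threshold
# are the usual conventions, assumed here rather than taken from the study.

def relative_viability(od_sample, od_control, od_blank):
    return 100.0 * (od_sample - od_blank) / (od_control - od_blank)

od_blank = 0.06
od_untreated_control = 0.92
od_hydrogel_extract = [0.85, 0.88, 0.81]            # replicate wells

for od in od_hydrogel_extract:
    v = relative_viability(od, od_untreated_control, od_blank)
    verdict = "non-cytotoxic" if v >= 70.0 else "cytotoxic potential"
    print(f"viability = {v:5.1f}%  ->  {verdict}")
```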

Keywords: cardiac cell therapy, cardiac ischemia, natural polymers, thermosensitive hydrogel

Procedia PDF Downloads 164
433 Identification and Understanding of Colloidal Destabilization Mechanisms in Geothermal Processes

Authors: Ines Raies, Eric Kohler, Marc Fleury, Béatrice Ledésert

Abstract:

In this work, the impact of clay minerals on the formation damage of sandstone reservoirs is studied to provide a better understanding of the problem of deep geothermal reservoir permeability reduction due to fine particle dispersion and migration. In some situations, despite the presence of filters in the geothermal loop at the surface, particles smaller than the filter size (<1 µm) may surprisingly generate significant permeability reduction, affecting the overall performance of the geothermal system in the long term. Our study is carried out on cores from a Triassic reservoir in the Paris Basin (Feigneux, 60 km northeast of Paris). Our first goal is to identify the clays responsible for clogging; a mineralogical characterization of these natural samples was therefore carried out by coupling X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM), and Energy Dispersive X-ray Spectroscopy (EDS). The results show that the studied stratigraphic interval contains mostly illite and chlorite particles. Moreover, the spatial arrangement of the clays in the rocks, as well as the morphology and size of the particles, suggests that illite is more easily mobilized by the flow in the pore network than chlorite. Thus, based on these results, illite particles were prepared and used in core-flooding experiments in order to better understand the factors leading to the aggregation and deposition of this type of clay particle in geothermal reservoirs under various physicochemical and hydrodynamic conditions. First, the stability of illite suspensions under geothermal conditions was investigated using different characterization techniques, including Dynamic Light Scattering (DLS) and Scanning Transmission Electron Microscopy (STEM). Various parameters such as the hydrodynamic radius (around 100 nm) and the morphology and surface area of the aggregates were measured. Then, core-flooding experiments were carried out using sand columns to mimic the permeability decline due to the injection of illite-containing fluids in sandstone reservoirs. In particular, the effects of ionic strength, temperature, particle concentration, and flow rate of the injected fluid were investigated. When the ionic strength increases, a permeability decline of more than a factor of 2 could be observed for pore velocities representative of in-situ conditions. Further details of the retention of particles in the columns were obtained from Magnetic Resonance Imaging and X-ray Tomography techniques, showing that the particle deposition is nonuniform along the column. It is clearly shown that very fine particles, as small as 100 nm, can generate significant permeability reduction under specific conditions in high-permeability porous media representative of the Triassic reservoirs of the Paris Basin. These retention mechanisms are explained in the general framework of the DLVO theory.
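As an aside on the DLS measurement mentioned above, the sketch below shows how a measured diffusion coefficient is converted to a hydrodynamic radius via the Stokes-Einstein relation; the diffusion coefficient used is hypothetical, chosen only to land near the ~100 nm value reported.

```python
import math

# Sketch: how a DLS measurement turns a diffusion coefficient into the
# hydrodynamic radius via the Stokes-Einstein relation,
#     R_h = k_B * T / (6 * pi * eta * D).
# The diffusion coefficient below is hypothetical, chosen only to land
# near the ~100 nm radius reported for the illite suspensions.

k_B = 1.380649e-23        # Boltzmann constant (J/K)
T = 298.15                # temperature (K)
eta = 0.89e-3             # water viscosity at 25 C (Pa.s)
D = 2.4e-12               # measured diffusion coefficient (m2/s), hypothetical

R_h = k_B * T / (6.0 * math.pi * eta * D)
print(f"Hydrodynamic radius = {R_h * 1e9:.0f} nm")
```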

Keywords: geothermal energy, reinjection, clays, colloids, retention, porosity, permeability decline, clogging, characterization, XRD, SEM-EDS, STEM, DLS, NMR, core flooding experiments

Procedia PDF Downloads 146
432 Correlation of Clinical and Sonographic Findings with Cytohistology for Diagnosis of Ovarian Tumours

Authors: Meenakshi Barsaul Chauhan, Aastha Chauhan, Shilpa Hurmade, Rajeev Sen, Jyotsna Sen, Monika Dalal

Abstract:

Introduction: Ovarian masses are common forms of neoplasm in women and represent two-thirds of gynaecological malignancies. A pre-operative suggestion of malignancy can guide the gynecologist to refer women with a suspected pelvic mass to a gynecological oncologist for appropriate therapy and optimized treatment, which can improve survival. In the younger age group, preoperative differentiation into benign or malignant pathology can guide the decision between conservative and radical surgery. Imaging modalities have a definite role in establishing the diagnosis. By using the International Ovarian Tumor Analysis (IOTA) classification with sonography, costly radiological methods like Magnetic Resonance Imaging (MRI) and computed tomography (CT) scans can be reduced, especially in developing countries like India. Thus, this study was undertaken to evaluate the role of clinical methods and sonography for diagnosis of the nature of the ovarian tumor. Material and Methods: This prospective observational study was conducted on 40 patients presenting with ovarian masses in the Department of Obstetrics and Gynaecology at a tertiary care center in northern India. Functional cysts were excluded. Ultrasonography and color Doppler were performed in all cases, and the IOTA rules were applied, which take into account locularity, size, presence of solid components, acoustic shadow, Doppler flow, etc. MRI/CT scans of the abdomen and pelvis were done in cases where sonography was inconclusive. In inoperable cases, fine needle aspiration cytology (FNAC) was done. The histopathology report after surgery and the cytology report after FNAC were correlated statistically with the pre-operative diagnosis made clinically and sonographically using the IOTA rules. Statistical Analysis: Descriptive measures were analyzed using the mean and standard deviation, the Student t-test was applied, and proportions were analyzed using the chi-square test. Inferential measures were analyzed by sensitivity, specificity, negative predictive value, and positive predictive value, as sketched below. Results: A provisional diagnosis of a benign tumor was made in 16 (42.5%) and of a malignant tumor in 24 (57.5%) patients on the basis of clinical findings. With the IOTA simple rules on sonography, 15 (37.5%) were found to be benign, while 23 (57.5%) were found to be malignant, and findings were inconclusive in 2 patients (5%). FNAC/histopathology, which was taken as the gold standard, reported 14 (35%) benign ovarian tumors and 26 (65%) malignant. Clinical findings alone were found to have a sensitivity of 66.6% and a specificity of 90.9%. USG alone had a sensitivity of 86% and a specificity of 80%. When clinical findings and the IOTA simple rules of sonography were combined (excluding inconclusive masses), the sensitivity and specificity were 83.3% and 92.3%, respectively; including inconclusive masses, the sensitivity came out to be 91.6% and the specificity 89.2%. Conclusion: The IOTA simple sonography rules are highly sensitive and specific in the prediction of ovarian malignancy, and they are also easy to use and easily reproducible. Thus, combining clinical examination with USG will help in the better management of patients in terms of time, cost, and prognosis. This will also avoid the need for costlier modalities like CT and MRI.
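The sketch below shows how sensitivity, specificity, PPV, and NPV are derived from a 2x2 table against the cytohistology gold standard; the counts are hypothetical and are not the study's actual contingency table.

```python
# Sketch: deriving sensitivity, specificity, PPV and NPV from a 2x2 table
# against the cytohistology gold standard. The counts below are hypothetical
# and are not the study's actual contingency table.

def diagnostic_performance(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Hypothetical example: 26 malignant and 14 benign tumours on histology
print(diagnostic_performance(tp=24, fp=1, fn=2, tn=13))
```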

Keywords: benign, international ovarian tumor analysis classification, malignant, ovarian tumours, sonography

Procedia PDF Downloads 52