Search results for: healthcare costs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3541


571 Frequency of Problem Drinking and Depression in Males with a History of Alcohol Consumption Admitted to a Tertiary Care Setting in Southern Sri Lanka

Authors: N. H. D. P. Fonseka, I. H. Rajapakse, A. S. Dissanayake

Abstract:

Background: Problem drinking, namely alcohol dependence (AD) and alcohol abuse (AA), is associated with major medical, social and economic adverse consequences. Problem drinking behaviour is noted among those admitted to hospitals with alcohol-related medical/surgical complaints as well as among those with unrelated complaints. The literature shows an association between alcohol consumption and depression. The aims of this study were to determine the frequency of problem drinking and depression among males with a history of alcohol consumption admitted to a tertiary care setting in Southern Sri Lanka. Method: Two hundred male patients who consumed alcohol, receiving care in medical and surgical wards of Teaching Hospital Galle, were assessed. The validated J12 questionnaire of the Mini International Neuropsychiatric Interview was administered to determine the frequency of AA and AD, and the validated PHQ-9 questionnaire to determine the prevalence and severity of depression. Results: Sixty-three participants (31%) had problem drinking. Of them, 61% had AD and 39% had AA. Depression was noted in 39 (19%) subjects. Among those who reported alcohol consumption not amounting to problem drinking, depression was noted in 23 (16%) participants: mild depression in 17, moderate in five and moderately severe in one. Among those with problem drinking, 16 (25%) had depression: mild in four, moderate in seven, moderately severe in three and severe in two. Conclusions: A high proportion of alcohol users had problem drinking. The adverse consequences associated with problem drinking place a major strain on the health system, especially in a low-resource setting where healthcare spending is limited and alcohol cessation support services are not well organised. Alcohol consumption and problem drinking behaviour therefore need to be inquired into at all medical consultations. The community prevalence of depression in Sri Lanka is approximately 10%. 
Depression among those consuming alcohol was two times higher than in the general population. The rate of depression among those with problem drinking was especially high, being 2.5 times more common than in the general population. A substantial proportion of these patients with depression had moderately severe or severe depression. When depression coexists with problem drinking, it may increase the tendency to consume alcohol as well as act as a barrier to the success of alcohol cessation interventions. Screening all patients who consume alcohol for depression, especially those who are problem drinkers, therefore becomes an important step in their clinical evaluation. In addition, in view of the high prevalence of problem drinking and coexistent depression, the need to organize a structured alcohol cessation support service in Sri Lanka, as well as the need to increase access to psychological evaluation and treatment for those with problem drinking, is highlighted.
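The severity bands quoted above (mild, moderate, moderately severe, severe) can be reproduced from PHQ-9 totals; a minimal sketch using the conventional published 5/10/15/20 cutoffs, which are an assumption here since the abstract does not restate its scoring thresholds:

```python
def phq9_severity(score: int) -> str:
    """Map a PHQ-9 total score (0-27) to the conventional severity band."""
    if not 0 <= score <= 27:
        raise ValueError("PHQ-9 total must be between 0 and 27")
    if score >= 20:
        return "severe"
    if score >= 15:
        return "moderately severe"
    if score >= 10:
        return "moderate"
    if score >= 5:
        return "mild"
    return "minimal"
```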

Keywords: alcohol abuse, alcohol, depression, problem drinking

Procedia PDF Downloads 142
570 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely adopted. It serves two main tasks: displaying results by coloring items according to class or feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure-preservation property and its answer to the crowding problem, in which all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where a cluster's area is proportional to its size in number of items and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is computed but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built from a subset of the data. While this approach is highly scalable, points could be mapped to the exact same position, making them indistinguishable, and such a model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding's shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, with each newly obtained embedding. 
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same per-embedding complexity as t-SNE, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
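A minimal NumPy sketch of the reuse idea described above: each new point is seeded at the embedding position of its nearest neighbor in the support set, so cluster positions carry over before any t-SNE optimization runs. The function name and jitter scale are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def seed_from_support(X_new, X_support, Y_support, jitter=1e-3, seed=0):
    """Initialize an embedding for X_new by copying the low-dimensional
    position of each point's nearest neighbor in the support set, plus a
    small random jitter so coincident points can separate. The result
    would then be handed to t-SNE as its initial layout."""
    rng = np.random.default_rng(seed)
    # Pairwise squared distances between new points and support points.
    d2 = ((X_new[:, None, :] - X_support[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)                # closest support point per row
    Y_init = Y_support[nearest].astype(float)  # inherit its embedding position
    Y_init += jitter * rng.standard_normal(Y_init.shape)
    return Y_init
```

In scikit-learn, such an array can be supplied via `TSNE(init=Y_init)`, keeping the optimization anchored near the support layout.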

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 124
569 Three Year Pedometer Based Physical Activity Intervention of the Adult Population in Qatar

Authors: Mercia I. Van Der Walt, Suzan Sayegh, Izzeldin E. L. J. Ibrahim, Mohamed G. Al-Kuwari, Manaf Kamil

Abstract:

Background: Increased physical activity is associated with improvements in health conditions. Walking is recognized as an easy form of physical activity and a strategy used in health promotion. Step into Health (SIH), a national community program, was established in Qatar to support physical activity promotion through the monitoring of step counts. This study aims to assess the physical activity levels of the adult population in Qatar through a pedometer-based community program over a three-year period. Methodology: This longitudinal study was conducted between January 2013 and December 2015 based on daily step counts. A total of 15,947 adults (8,551 males and 7,396 females) of different nationalities, enrolled in the program and aged 18 to 64, are included. The program involves free distribution of pedometers to members who voluntarily choose to register. It is supported by a self-monitoring online account linked to a web database. All members are informed of the 10,000 steps/day target, and automated emails as well as text messages are sent as reminders to upload data. Daily step counts were measured with the Omron HJ-324U pedometer (Omron Healthcare Co., Ltd., Japan). Analyses were done on the data extracted from the web database. Results: The daily average step count for the overall community increased from 4,830 steps/day (2013) to 6,124 steps/day (2015). This increase was also observed within the three age categories (18–30), (31–45) and (>45) years. Average steps per day were higher among males than among females in each of these age groups. Moreover, males and females in the >45 years group showed the highest average step counts, with 7,010 steps/day and 5,564 steps/day respectively. 
The 21% increase in overall step count over the study period is associated with a well-resourced program and ongoing impact in smaller communities such as workplaces and universities, a step in the right direction. However, the average step count of 6,124 steps/day in the third year still falls in the low-active category. Although the program produced an increased step count, 33% of the study population were low active and 35% were sedentary, with only 32% being active. Conclusion: This study indicates that the pedometer-based intervention was effective in increasing participants' daily physical activity. However, alternative approaches need to be incorporated within the program to educate and encourage the community to meet the physical activity recommendations in relation to step count.
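The sedentary / low-active / active bands used above can be reproduced from daily step averages; a small sketch using the commonly cited step-count cutoffs (<5,000 sedentary, 5,000–7,499 low active, ≥7,500 active), which are an assumption here since the abstract does not state its exact thresholds:

```python
def activity_category(avg_steps_per_day: float) -> str:
    """Classify a daily step-count average into a coarse activity band."""
    if avg_steps_per_day < 0:
        raise ValueError("step count cannot be negative")
    if avg_steps_per_day < 5000:
        return "sedentary"
    if avg_steps_per_day < 7500:
        return "low active"
    return "active"
```

Under these cutoffs the 2013 community average (4,830 steps/day) is sedentary and the 2015 average (6,124 steps/day) is low active, matching the abstract's description.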

Keywords: pedometer, physical activity, Qatar, step count

Procedia PDF Downloads 224
568 The Relationship between the Skill Mix Model and Patient Mortality: A Systematic Review

Authors: Yi-Fung Lin, Shiow-Ching Shun, Wen-Yu Hu

Abstract:

Background: A skill mix model is regarded as one of the most effective methods of reducing nursing shortages, as well as easing nursing staff workloads and labor costs. Although this model shows several benefits for the health workforce, the relationship between the optimal skill mix model and the patient mortality rate remains to be discovered. Objectives: This review aimed to explore the relationship between the skill mix model and the patient mortality rate in acute care hospitals. Data Sources: A systematic search of the PubMed, Web of Science, Embase, and Cochrane Library databases retrieved studies published between January 1986 and March 2022. Review methods: Two independent reviewers screened the titles and abstracts based on selection criteria, extracted the data, and performed a critical appraisal of each included study using the STROBE checklist. Studies that focused on adult patients in acute care hospitals and reported both the skill mix model and the patient mortality rate were included in the analysis. Results: The six included studies were conducted in the USA, Canada, Italy, Taiwan, and European countries (Belgium, England, Finland, Ireland, Spain, and Switzerland), covering patients in medical, surgical, and intensive care units. Their skill mix teams included both nurses and nursing assistants. The main finding is that three studies (324,592 participants) show evidence of lower mortality rates in hospitals with a higher percentage of registered nurse staff (range 36.1%–100%), but three articles (1,122,270 participants) did not find the same result (range 46%–96%). However, based on the appraisal findings, all of the studies showing a significant association meet good quality standards, whereas only one-third of their counterparts do. Conclusions: In light of the limited amount and quality of published research in this review, it is prudent to treat the findings with caution. 
Although the evidence is of insufficient certainty to draw conclusions about the relationship between nurse staffing level and patient mortality, this review sheds light on the direction of future studies. A limitation of this review is the variation in skill mix models among countries and institutions, which made a meta-analysis comparing them impossible.

Keywords: nurse staffing level, nursing assistants, mortality, skill mix

Procedia PDF Downloads 92
567 Transition from Linear to Circular Business Models with Service Design Methodology

Authors: Minna-Maari Harmaala, Hanna Harilainen

Abstract:

Estimates of the economic value of transitioning to circular economy models vary, but it has been estimated to represent $1 trillion of new business for the global economy. In Europe alone, estimates claim that adopting circular-economy principles could not only have environmental and social benefits but also generate a net economic benefit of €1.8 trillion by 2030. Proponents of a circular economy argue that it offers a major opportunity to increase resource productivity, decrease resource dependence and waste, and increase employment and growth. A circular system could improve competitiveness and unleash innovation. Yet most companies are not capturing these opportunities, and thus even abundant circular opportunities remain uncaptured even though they would seem inherently profitable. Service design, in broad terms, relates to developing an existing or new service or service concept with emphasis on the customer experience from the onset of the development process. Service design may even mean starting from scratch and co-creating the service concept entirely with the help of customer involvement. Service design methodologies provide a structured way of incorporating customer understanding and involvement into the process of designing better services that resonate better with customer needs. A business model is a depiction of how a company creates, delivers, and captures value, i.e., how it organizes its business. The process of business model development and adjustment or modification is also called business model innovation, and innovating business models has become part of business strategy. Our hypothesis is that, in addition to linear models still being easier to adopt and often having lower threshold costs, companies lack an understanding of how circular models can be adopted into their business and of how willing and ready customers will be to adopt new circular business models. 
In our research, we use a robust service design methodology to develop circular economy solutions with two case study companies. The aim of the process is not only to develop the service concepts and portfolio, but to demonstrate that the willingness to adopt circular solutions exists in the customer base. In addition to service design, we employ business model innovation methods to further develop, test, and validate the new circular business models. The results clearly indicate that among the customer groups there are specific customer personas that are willing to adopt circular solutions and in fact expect the companies to take a leading role in the transition towards a circular economy. At the same time, there is a group of indifferent customers, to whom the idea of circularity provides no added value. In addition, the case studies clearly show what changes the adoption of circular economy principles brings to the existing business model and how they can be integrated.

Keywords: business model innovation, circular economy, circular economy business models, service design

Procedia PDF Downloads 108
566 An Econometric Analysis of the Flat Tax Revolution

Authors: Wayne Tarrant, Ethan Petersen

Abstract:

The concept of a flat tax goes back at least to the Biblical tithe. A progressive income tax was first vociferously espoused in a small but famous pamphlet in 1848 (although England had an emergency progressive tax for war costs prior to this). Within a few years many countries had adopted the progressive structure. The flat tax was reinstated only in some small countries and British protectorates until Mart Laar was elected Prime Minister of Estonia in 1992. Since Estonia's adoption of the flat tax in 1993, many other formerly Communist countries have likewise abandoned progressive income taxes. Economists had expectations of what would happen when a flat tax was enacted, but very little work has been done on actually measuring the effect. With a testbed of 21 countries in this region that currently have a flat tax, much comparison is possible. Several countries have retained progressive taxes, giving an opportunity for contrast. There are also the cases of the Czech Republic and Slovakia, which adopted and later abandoned the flat tax. Further, with over 20 years' worth of economic history in some flat tax countries, we can begin serious longitudinal study. In this paper we consider many economic variables to determine whether there are statistically significant differences from before to after the adoption of a flat tax. We consider unemployment rates, tax receipts, GDP growth, Gini coefficients, and market data where available. Comparisons are made through event studies and time series methods. The results are mixed, but we draw statistically significant conclusions about some effects. We also look at the different implementations of the flat tax. In some countries the income and corporate tax rates are equal; in others the income tax has a lower rate, while in still others the reverse is true. Each of these sends a clear message to individuals and corporations, and the policy makers surely have a desired effect in mind. 
We group countries with similar policies, try to determine whether the intended effect actually occurred, and report the results. This is a work in progress, and we welcome suggestions of variables to consider. Further, some of the data from before the fall of the Iron Curtain are suspect: since there are new ruling regimes in these countries, the methods of computing different statistical measures have changed. Although we first look at the raw data as reported, we also attempt to account for these changes. We show which data seem to be fictional and suggest ways to infer the needed statistics from other data; these results are reported beside those based on the reported data. Since there is debate about taxation structure, this paper can help inform policymakers of the changes the flat tax has caused in other countries. The work shows some strengths and weaknesses of a flat tax structure. Moreover, it provides the beginnings of a scientific analysis of the flat tax in practice rather than a discussion based solely upon theory and conjecture.
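A minimal sketch of the kind of before/after comparison described above, computing the change in the mean of an annual series around an adoption year. This is illustrative data handling only; the actual study uses event-study and time-series methods, and the sample figures below are invented:

```python
def mean_change(series, adoption_year):
    """Difference in mean value of an annual series after (and including)
    the adoption year versus before it.  `series` maps year -> value."""
    before = [v for y, v in series.items() if y < adoption_year]
    after = [v for y, v in series.items() if y >= adoption_year]
    if not before or not after:
        raise ValueError("need observations on both sides of the event")
    return sum(after) / len(after) - sum(before) / len(before)
```

A positive result for, say, GDP growth would be the raw before/after effect; significance testing would still be required on top of it.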

Keywords: flat tax, financial markets, GDP, unemployment rate, Gini coefficient

Procedia PDF Downloads 320
565 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass

Authors: Ricardo Torcato, Helder Morais

Abstract:

The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow obtaining, with repeatability, parts with complex shapes, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance, to crystal processing. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and the analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear around the indentation. The mechanical impulse excitation test estimates the Young's modulus, shear modulus and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process. 
The tests were designed using the Taguchi method to correlate the input parameters (feed rate, tool rotation speed and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (best roughness at cutting forces that do not compromise the material structure or the tool life) using ANOVA. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides better surface roughness than conventional grinding.
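For smaller-is-better responses such as surface roughness and cutting force, Taguchi analysis conventionally ranks parameter levels by the signal-to-noise ratio S/N = -10·log10(mean(y²)); a short sketch of that standard formula, not of the paper's specific experimental setup:

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-is-better S/N ratio in dB: higher is better,
    i.e. lower and more consistent response measurements."""
    if not values:
        raise ValueError("need at least one measurement")
    mean_sq = sum(v * v for v in values) / len(values)
    return -10.0 * math.log10(mean_sq)
```

The level of each factor (feed rate, tool rotation speed, depth of cut) with the highest S/N would be selected; ANOVA then apportions how much each factor contributes to the variation.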

Keywords: CNC machining, crystal glass, cutting forces, hardness

Procedia PDF Downloads 134
564 The Significance of Picture Mining in the Fashion and Design as a New Research Method

Authors: Katsue Edo, Yu Hiroi

Abstract:

Increasing attention has been paid to using pictures and photographs in social science research since the beginning of the 21st century. Meanwhile, we have been studying the usefulness of Picture Mining, one of these new picture-based research methods. Picture Mining is an explorative analysis method that extracts useful information from pictures, photographs, and static or moving images. It is often compared with text mining methods. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, both technological and social (Edo et al. 2013). Low-priced digital cameras and iPhones, high information transmission speeds, low costs for transferring information, and the high performance and resolution of mobile phone cameras have changed people's photographing behavior. Consequently, there is less resistance to taking and processing photographs among most people in developing countries. In these studies, this method of collecting data from respondents is often called 'participant-generated photography' or 'respondent-generated visual imagery', which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). But there are few systematic and conceptual studies that support the significance of these methods. We have worked in recent years to conceptualize these picture-based research methods and to formalize theoretical findings (Edo et al. 2014). We have identified, inductively and through case studies, the fields where Picture Mining is most useful: 1) research on consumer and customer lifestyles; 2) new product development; 3) research in fashion and design. 
Though we have found that it will be useful in these fields and areas, we must verify these assumptions. In this study we focus on the field of fashion and design, to determine whether Picture Mining methods are really reliable in this area. To do so, we conducted an empirical study of respondents' attitudes and behavior concerning pictures and photographs. We compared attitudes and behavior toward taking pictures of fashion with those toward pictures of meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Respondents do not often take pictures of fashion or upload such pictures online, for instance to Facebook and Instagram, compared with meals and food, because of the difficulty of taking them. We conclude that we should be more careful in analyzing pictures in the fashion area, for some kind of bias may still exist even though the environment for pictures has drastically changed in recent years.

Keywords: empirical research, fashion and design, Picture Mining, qualitative research

Procedia PDF Downloads 342
563 Liability of AI in Workplace: A Comparative Approach Between Shari’ah and Common Law

Authors: Barakat Adebisi Raji

Abstract:

In the workplace, Artificial Intelligence has, in recent years, emerged as a transformative technology that revolutionizes how organizations operate and perform tasks. It is a technology that has a significant impact on transportation, manufacturing, education, cyber security, robotics, agriculture, healthcare, and many other sectors. By harnessing AI technology, workplaces can enhance productivity, streamline processes, and make more informed decisions. Given the potential of AI to change the way we work and its impact on the labor market in the years to come, employers understand that it entails legal challenges and risks despite its inherent advantages. Therefore, as AI continues to integrate into various aspects of the workplace, understanding the legal and ethical implications becomes paramount. Also central to this study is the question of who is held liable when AI causes harm: the company that created the AI, the person who programmed the AI algorithm, or the person who uses the AI? Thus, the aim of this paper is to provide a detailed overview of how AI-related liabilities are addressed under each legal tradition and to shed light on potential areas of accord and divergence between the two legal cultures. The objectives of this paper are to (i) examine the ability of common law and Islamic law to accommodate the issues and damage caused by AI in the workplace, and the legality of compensation for such injury; (ii) discuss the extent to which AI can be described as a legal personality able to bear responsibility; (iii) examine the similarities and disparities between common law and Islamic jurisprudence on the liability of AI in the workplace. The methodology adopted in this work was qualitative, a purely doctrinal research method in which information is gathered from the primary and secondary sources of law, such as comprehensive materials found in journal articles, expert-authored books and online news sources. 
A comparative legal method was also used to juxtapose the approaches of Islamic and common law. The paper concludes that since AI, in its current legal state, is not recognized as a legal entity, operators or manufacturers of AI should be held liable for any damage that arises, and the determination of who bears the responsibility should depend on the circumstances surrounding each scenario. The study recommends granting legal personality to AI systems, establishing legal rights and liabilities for AI, and establishing a holistic Islamic virtue-based AI ethics framework.

Keywords: AI, healthcare, agriculture, cyber security, common law, Shari'ah

Procedia PDF Downloads 16
562 Environmental Related Mortality Rates through Artificial Intelligence Tools

Authors: Stamatis Zoras, Vasilis Evagelopoulos, Theodoros Staurakas

Abstract:

The association between elevated air pollution levels and extreme climate conditions (temperature, particulate matter, ozone levels, etc.) and health consequences has recently been the focus of a significant number of studies. It varies depending on the time of year, whether during the hot or the cold period, and particularly when extreme air pollution and weather events are observed, e.g., air pollution episodes and persistent heatwaves. It also varies spatially, since air quality and climate extremes affect human health differently in metropolitan and rural areas. An air pollutant concentration or a climate extreme has a different form of impact depending on whether the focus area is the countryside or the urban environment. In the built environment, the effects of climate extremes are driven by the resulting microclimate, which must be studied more efficiently. Factors such as biology and age group may interact with environmental stressors such as increased air pollution/noise levels and overheating of buildings, in comparison to rural areas. Gridded air quality and climate variables derived from the land-surface observation network of West Macedonia in Greece will be analysed against mortality data in a spatial format for the region of West Macedonia. Artificial intelligence (AI) tools will be used for data correction and for prediction of health deterioration with climatic conditions and air pollution at the local scale. This would reveal the built-environment implications relative to the countryside. The air pollution and climatic data have been collected from meteorological stations and span the period from 2000 to 2009. These will be projected against the mortality rate data in daily, monthly, seasonal and annual grids. 
The grids will operate as AI-based warning models for decision makers, mapping the health conditions in rural and urban areas to ensure improved awareness in the healthcare system by taking into account the predicted changing climate conditions. Gridded data of climate conditions and air quality levels against mortality rates will be presented through AI-analysed gridded indicators of the implicated variables. An AI-based gridded warning platform at local scales is then developed as a future awareness platform at the regional level.

Keywords: air quality, artificial intelligence, climatic conditions, mortality

Procedia PDF Downloads 88
561 Legal Initiatives for Afghan Humanitarian Crisis

Authors: Fereshteh Ganjavi, Rachel Schaffer, Varsha Jorawar

Abstract:

Elena’s Light is a non-profit organization focused on building brighter futures for refugees, especially women and children. Our mission is to empower refugee women and children by addressing the social, legal, and public health issues that predominantly concern them. Elena’s Light offers a range of services that protect refugees from structural disadvantage, cultural and social stress, marginalization, and other stressors related to migration. Using a three-pronged approach, our programs focus on legal advocacy, English language acquisition, and health and wellness. Following the Afghan humanitarian crisis, Elena’s Light has developed and intensified advocacy efforts in the legal realm to address the influx of refugees who desperately need assistance. We developed and hosted a Know Your Rights presentation with local immigration lawyers and professionals in February 2022 on Afghan humanitarian parole, which was very successful, with over 100 attendees. Elena’s Light is hosting a second Know Your Rights session in early August 2022 on immigration options for Afghans, including Temporary Protected Status (TPS), asylum, the Special Immigrant Visa (SIV), and humanitarian parole. Lastly, Elena’s Light is also leading a local initiative to develop a pro bono committee to respond to the overwhelming need for lawyers to work on legal cases for Afghans during this crisis. Furthermore, through our other services, we provide free, in-home, customizable ESL tutoring sessions to refugee women with a focus on driver’s education, facilitating acculturation, and improving employment opportunities. We also provide in-home maternal, pediatric, and mental health education and wellness services aimed at addressing the explicit and implicit barriers to healthcare for refugee populations. Elena’s Light’s diverse community aims to counter the structural disadvantages and the anxiety-inducing emotions and experiences related to being a refugee. 
We would like to join this International Conference on Refugee Law since protecting refugee rights is our mission. We would like to share what we have learned from our legal initiatives for refugee rights. We would also like to listen, learn from, and discuss with experts and researchers how to better understand and advocate for refugee rights. We hope to improve our understanding of how to provide better legal aid for our clients through this conference.

Keywords: legal, advocacy, Afghan humanitarian crisis, policy, pro-bono

Procedia PDF Downloads 106
560 An Exploratory Study on the Level of Awareness and Common Barriers of Physicians on Overweight and Obesity Management in Bangladesh

Authors: Kamrun Nahar Koly, Saimul Islam

Abstract:

Overweight and obesity are increasing at an alarming rate and are a leading risk factor for morbidity throughout the world. In a country like Bangladesh, undernutrition and overweight co-exist at the same time, yet this issue remains underexplored. The aim of the present study was to assess the knowledge and attitudes of physicians and identify the barriers they face regarding overweight and obesity management in urban hospitals of Dhaka city, Bangladesh. A simple cross-sectional study was conducted at two selected government and two private hospitals to assess knowledge, attitudes, and common barriers regarding overweight and obesity management among healthcare professionals. One hundred and fifty-five physicians were surveyed. A standard questionnaire was constructed in the local language and administered by interview. Of the 155 physicians, the majority, 53 (34.20%), were working at SMC, 36 (23.20%) at DMC, 33 (21.30%) at SSMC, and the remaining 33 (21.30%) at HFRCMH. The mean age of the study physicians was 31.88±5.92 years. The majority of the physicians, 80 (51.60%), were not able to state the correct prevalence of obesity, although a substantial number, 75 (48.40%), could mark the right answer. Among the physicians, 150 (96.77%) reported BMI as a diagnostic index for overweight and obesity, whereas 43 (27.74%) marked waist circumference, 30 (19.35%) waist-hip ratio, and 26 (16.77%) mid-arm circumference. A substantial proportion of the physicians, 71 (46.70%), thought that they could not do much to control weight problems in the Bangladeshi context, though 42 (27.60%) disagreed and 39 (25.70%) were neutral. The majority, 147 (96.1%), thought that a family-based education program would be beneficial, and 145 (94.8%) mentioned raising awareness among mothers, as the mother is the primary caregiver. 
A school-based education program, which would also support early intervention, was endorsed by 142 (92.8%) of the physicians, and a community-based education program by 136 (89.5%). About 74 (47.7%) thought that patients still lack the motivation to maintain their weight properly, while 73 (47.1%) considered having too many patients to deal with a barrier as well. Lack of a national policy or management guideline was cited as an obstacle by 60 (38.7%) of the physicians. The relationship between practicing weight management as part of the general examination and chronic disease management was statistically significantly associated (p<0.05) with physician occupational status. In addition, perceived barriers such as lack of parental support and lack of a national policy were statistically significantly associated (p<0.05) with physician occupational status. For young physicians, more training programmes will be needed to translate their knowledge and attitudes into practice. However, several important barriers interfere with physicians' treatment efforts and need to be addressed.

Keywords: obesity management, physician, awareness, barriers, Bangladesh

Procedia PDF Downloads 149
559 Understanding the Benefits of Multiple-Use Water Systems (MUS) for Smallholder Farmers in the Rural Hills of Nepal

Authors: RAJ KUMAR G.C.

Abstract:

There are tremendous opportunities to maximize smallholder farmers’ income from small-scale water resource development through micro irrigation and multiple-use water systems (MUS). MUS are an improved water management approach, developed and tested successfully by iDE, that pipes water to a community both for domestic use and for agriculture using efficient micro irrigation. Different MUS models address different landscape constraints, water demand, and users’ preferences. MUS are complemented by micro irrigation kits, which were developed by iDE to enable farmers to grow high-value crops year-round and to use limited water resources efficiently. Over the last 15 years, iDE’s promotion of the MUS approach has encouraged government and other key stakeholders to invest in MUS for better planning of scarce water resources. Currently, about 60% of the cost of MUS construction is covered by the government and community. Based on iDE’s experience, a gravity-fed MUS costs approximately $125 per household to construct, and it can increase household income by $300 per year. A key element of the MUS approach is keeping farmers well linked to input supply systems and local produce collection centers, which helps to ensure that the farmers can produce a sufficient quantity of high-quality produce that earns a fair price. This process in turn creates an enabling environment for smallholders to invest in MUS and micro irrigation. Therefore, MUS should be seen as an integrated package of interventions –the end users, water sources, technologies, and the marketplace– that together enhance technical, financial, and institutional sustainability. Communities are trained to participate in sustainable water resource management as a part of the MUS planning and construction process. The MUS approach is cost-effective, improves community governance of scarce water resources, helps smallholder farmers to improve rural health and livelihoods, and promotes gender equity. 
MUS systems are simple to maintain and communities are trained to ensure that they can undertake minor maintenance procedures themselves. All in all, the iDE Nepal MUS offers multiple benefits and represents a practical and sustainable model of the MUS approach. Moreover, there is a growing national consensus that rural water supply systems should be designed for multiple uses, acknowledging that substantial work remains in developing national-level and local capacity and policies for scale-up.
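The cost-benefit case above can be made concrete with a back-of-envelope payback calculation using the figures quoted in the abstract ($125 construction cost per household, $300 additional household income per year); the function name is ours, for illustration only.

```python
def payback_months(construction_cost_usd, annual_income_gain_usd):
    """Months of extra income needed to recover the construction cost."""
    return 12 * construction_cost_usd / annual_income_gain_usd

# With the abstract's figures, a household recovers the investment in
# payback_months(125, 300) = 5.0 months.
```

A payback period well under a year is consistent with the abstract's claim that smallholders and co-financing governments find the investment attractive.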

Keywords: multiple-use water systems, small scale water resources, rural livelihoods, practical and sustainable model

Procedia PDF Downloads 272
558 Impact of Rapid Urbanization on Health Sector in India

Authors: Madhvi Bhayani

Abstract:

Introduction: Due to the rapid pace of urbanization, urban health issues have become one of the significant threats to future development in India, with serious repercussions for citizens’ health. Urbanization in India is increasing at an unprecedented rate and has generated an urban health crisis among city dwellers, especially the urban poor. An increasing proportion of the urban poor have health indicators worse than those of their rural counterparts and face social and financial barriers in accessing healthcare services; these conditions put human health at risk. Local, state, and national governments alike are tackling the challenges of urbanization, as it has become essential for government to provide the basic necessities and better infrastructure that make life in cities safe and healthy. Thus, the paper argues that if no major realistic steps are taken with immediate effect, citizens will face a huge burden of health hazards. Aim: This paper attempts to analyze the current infrastructure, government planning, and future policy; it also discusses the challenges and outcomes of urbanization and their impact on health, and predicts the future trend with regard to disease burden in urban areas. Methods: The paper draws on secondary data, taking into consideration the connection between rapid urbanization and public health challenges, health and the healthcare system, and service delivery to citizens, especially the urban poor. Extensive analyses of government census reports, health information and policy, government health-related schemes, and urban development were performed, and based on past trends, the future status of urban infrastructure and health outcomes is predicted. 
Social, economic, and political dimensions are also considered from regional, national, and global perspectives, which are incorporated in the paper to make realistic predictions for the future. Findings and Conclusion: The findings of the paper show that India suffers a double burden: rapidly increasing disease and growing health inequalities and disparities in health outcomes. Existing tools of urban health governance fall short of providing better healthcare services; collaboration and communication among the state, national, and local governments, and with non-governmental partners, need to be strengthened. Based on the findings, the policy implications are described and areas for future research are defined.

Keywords: health care, urbanization, urban health, service delivery

Procedia PDF Downloads 182
557 Improving Engagement: Dental Veneers, a Qualitative Analysis of Posts on Instagram

Authors: Matthew Sedgwick

Abstract:

Introduction: Social media continues to grow in popularity and Instagram is one of the largest platforms available. It provides an invaluable method of communication between health care professionals and patients. Both patients and dentists can benefit from seeing clinical cases posted by other members of the profession. It can prompt discussion about how the outcome was achieved and showcases what is possible with the right techniques and planning. This study aimed to identify what people were posting about the topic ‘veneers’, inform health care professionals as to what content had the most engagement, and make recommendations as to how to improve the quality of social media posts. Design: 150 consecutive posts for the search term ‘veneers’ were analyzed retrospectively between 21st October 2021 and 31st October 2021. Non-English language posts, duplicated posts, and posts not about dental veneers were excluded. After exclusions were applied, 80 posts were included in the study for analysis. The content of the posts was analyzed and coded and the main themes were identified. The number of comments, likes and views were also recorded for each post. Results: The themes were: before and after treatment, cost, dental training courses, treatment process and trial smiles. Dentists were the most common posters of content (82.5%) and it was interesting to note that no patients posted about treatment in this sample. The main type of media was photographs (93.75%) compared to video (6.25%). Videos had an average of 45,541 views and more comments and likes than the average for photographs. The average number of comments and likes per post were 20.88 and 761.58, respectively. Conclusion: Before and after photographs were the most common finding as this is how dentists showcase their work. The study showed that videos showing the treatment process had more engagement than photographs. 
Dentists should consider making video posts showing the patient journey, including before and after veneer treatment, as this can result in more potential patients and colleagues viewing the content. Video content could help dentists distinguish their posts from others, as it can also be used across other platforms such as TikTok or Facebook, reaching a wider audience. More informative posts are required about how the results shown are achieved, including potential costs. This would increase transparency regarding this treatment method, including the financial and potential biological cost to teeth. As a result, this will improve patient understanding and become an invaluable adjunct in informed consent.

Keywords: content analysis, dental veneers, Instagram, social media

Procedia PDF Downloads 120
556 An Examination of Earnings Management by Publicly Listed Targets Ahead of Mergers and Acquisitions

Authors: T. Elrazaz

Abstract:

This paper examines accrual and real earnings management by publicly listed targets around mergers and acquisitions. Prior literature shows that earnings management around mergers and acquisitions can have a significant economic impact because of the associated wealth transfers among stakeholders. More importantly, acting on behalf of their shareholders or pursuing their self-interests, managers of both targets and acquirers may be equally motivated to manipulate earnings prior to an acquisition to generate higher gains for their shareholders or themselves. Building on the grounds of information asymmetry, agency conflicts, stewardship theory, and the revelation principle, this study addresses the question of whether takeover targets employ accrual and real earnings management in the periods prior to the announcement of Mergers and Acquisitions (M&A). Additionally, this study examines whether acquirers are able to detect targets’ earnings management, and in response, adjust the acquisition premium paid in order not to face the risk of overpayment. This study uses an aggregate accruals approach in estimating accrual earnings management as proxied by estimated abnormal accruals. Additionally, real earnings management is proxied for by employing widely used models in accounting and finance literature. The results of this study indicate that takeover targets manipulate their earnings using accruals in the second year with an earnings release prior to the announcement of the M&A. Moreover, in partitioning the sample of targets according to the method of payment used in the deal, the results are restricted only to targets of stock-financed deals. These results are consistent with the argument that targets of cash-only or mixed-payment deals do not have the same strong motivations to manage their earnings as their stock-financed counterparts do, additionally supporting the findings of prior studies that the method of payment in takeovers is value relevant. 
The findings of this study also indicate that takeover targets manipulate earnings upwards through cutting discretionary expenses the year prior to the acquisition while they do not do so by manipulating sales or production costs. Moreover, in partitioning the sample of targets according to the method of payment used in the deal, the results are restricted only to targets of stock-financed deals, providing further robustness to the results derived under the accrual-based models. Finally, this study finds evidence suggesting that acquirers are fully aware of the accrual-based techniques employed by takeover targets and can unveil such manipulation practices. These results are robust to alternative accrual and real earnings management proxies, as well as controlling for the method of payment in the deal.
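The abstract describes an aggregate accruals approach that proxies earnings management by estimated abnormal accruals but does not name its exact specification. A minimal sketch of one widely used variant, a modified Jones-style cross-sectional regression, is shown below; the function name, inputs, and specification are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def abnormal_accruals(total_accruals, assets_lag, d_rev, d_rec, ppe):
    """Illustrative modified Jones-style estimator: regress total
    accruals (scaled by lagged assets) on a scale term, the change in
    cash revenue, and gross PP&E for firms in one industry-year, then
    take the residuals as abnormal (discretionary) accruals."""
    y = total_accruals / assets_lag
    X = np.column_stack([
        1.0 / assets_lag,               # scale term
        (d_rev - d_rec) / assets_lag,   # revenue change net of receivables
        ppe / assets_lag,               # gross property, plant & equipment
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef                 # residuals = abnormal accruals
```

Positive residuals under such a model are read as upward accrual manipulation, which is the sense in which the study's targets "manage earnings" ahead of the announcement.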

Keywords: accrual earnings management, acquisition premium, real earnings management, takeover targets

Procedia PDF Downloads 96
555 Apatite Flotation Using Fruits' Oil as Collector and Sorghum as Depressant

Authors: Elenice Maria Schons Silva, Andre Carlos Silva

Abstract:

The growing demand for raw materials has increased mining activities. The mineral industry faces the challenge of processing more complex ores, with very small particles and low grade, together with constant pressure to reduce production costs and environmental impacts. Froth flotation deserves special attention among the concentration methods for mineral processing. Besides its great selectivity for different minerals, flotation is a highly efficient method for processing fine particles. The process is based on the minerals' surface physicochemical properties, and separation is only possible with the aid of chemicals such as collectors, frothers, modifiers, and depressants. In order to use sustainable and eco-friendly reagents, oils extracted from three different vegetable species (pequi’s pulp, macauba’s nut and pulp, and Jatropha curcas) were studied and tested as apatite collectors. Since the oils are not soluble in water, an alkaline hydrolysis (saponification) was necessary before their contact with the minerals. The saponification was performed at room temperature. The tests with the new collectors were carried out at pH 9, and Flotigam 5806, a synthetic mix of fatty acids manufactured by Clariant and industrially adopted as an apatite collector, was used as benchmark. In order to find a feasible replacement for cornstarch, the flour and starch of a graniferous variety of sorghum were tested as depressants. Apatite samples were used in the flotation tests. XRF (X-ray fluorescence), XRD (X-ray diffraction), and SEM/EDS (Scanning Electron Microscopy with Energy Dispersive Spectroscopy) were used to characterize the apatite samples. Zeta potential measurements were performed in the pH range from 3.5 to 12.5. A commercial cornstarch was used as depressant benchmark. Four depressant dosages and pH values were tested. A statistical test was used to verify the influence of pH, dosage, and starch type on mineral recoveries. 
For dosages equal to or higher than 7.5 mg/L, pequi oil recovered almost all apatite particles. On one hand, macauba’s pulp oil showed excellent results at all dosages, with more than 90% apatite recovery; on the other hand, with the nut oil, the highest recovery found was around 84%. Jatropha curcas oil was the second-best oil tested, and more than 90% of the apatite particles were recovered at a dosage of 7.5 mg/L. Regarding the depressant, the lowest apatite recovery with sorghum starch was found at a dosage of 1,200 g/t and pH 11, resulting in a recovery of 1.99%. The apatite recovery under the same conditions was 1.40% for sorghum flour (approximately 30% lower). When compared with cornstarch under the same conditions, sorghum flour produced an apatite recovery 91% lower.

Keywords: collectors, depressants, flotation, mineral processing

Procedia PDF Downloads 129
554 The Impact of Childhood Cancer on Young Adult Survivors: A Life Course Perspective

Authors: Bridgette Merriman, Wen Fan

Abstract:

Background: Existing cancer survivorship literature explores varying physical, psychosocial, and psychological late effects experienced by survivors of childhood cancer. However, adolescent and young adult (AYA) survivors of childhood cancer are understudied compared to their adult and pediatric cancer counterparts. Furthermore, existing quality of life (QoL) research fails to account for how cancer survivorship affects survivors across the lifespan. Given that prior research suggests positive cognitive appraisals of adverse events, such as cancer, mitigate detrimental psychosocial symptomologies later in life, it is crucial to understand cancer’s impacts on AYA survivors of childhood malignancies across the life course in order to best support these individuals and prevent maladaptive psychosocial outcomes. Methods: This qualitative study adopted the life-course perspective to investigate the experiences of AYA survivors of childhood malignancies. Eligible participants were AYA aged 21-30 who were diagnosed with cancer before age 18 and had been off active treatment for more than 2 years. Participants were recruited through social media posts. Study fulfillment included taking part in one semi-structured video interview to explore areas of survivorship previously identified as being specific to AYA survivors. Interviews were transcribed, coded, and analyzed in accordance with narrative analysis and life-course theory. This study was approved by the Boston College Institutional Review Board. Results: Of 28 individuals who met inclusion criteria and expressed interest in the study, nineteen participants (12 women, 7 men, mean age 25.4 years old) completed the study. Life course theory analysis revealed that events relating to childhood cancer are interconnected throughout the life course rather than isolated events. This “trail of survivorship” includes age at diagnosis, transitioning to life after cancer, and relationships with other childhood survivors. 
Despite variability in objective characteristics surrounding these events, participants recalled positive experiences regarding at least one checkpoint, ultimately finding positive meaning from their cancer experience. Conclusions: These findings suggest that favorable subjective experiences at these checkpoints are critical in fostering positive conceptions of childhood malignancy for AYA survivors of childhood cancer. Ultimately, healthcare professionals and communities may use these findings to guide support resources and interventions for childhood cancer patients and AYA survivors, therein minimizing detrimental psychosocial effects and maximizing resiliency.

Keywords: medical sociology, pediatric oncology, survivorship, qualitative, life course perspective

Procedia PDF Downloads 49
553 Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle

Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari

Abstract:

Genotype imputation has been used to reduce genomic selection costs. To increase haplotype detection accuracy in methods that consider linkage disequilibrium, combining genotype data from different panels is another approach that could be used. Therefore, this study aimed to evaluate linkage disequilibrium and haplotype blocks in two high-density panels before and after imputation to a combined panel in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), of which 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axiom Genome-Wide BOS 1 Array Plate (AHD). After quality control, 809 IHD animals (509,107 SNPs) and 93 AHD animals (427,875 SNPs) remained for analysis. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation was conducted using the software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. Linkage disequilibrium and haplotype block studies were carried out for IHD, AHD, and the imputed CP. Two linkage disequilibrium measures were considered: the correlation coefficient between alleles at two loci (r²) and |D’|. Both measures were calculated using the software PLINK. The haplotype blocks were estimated using the software Haploview. The r² measure presented a different decay when compared to |D’|, wherein AHD and IHD had almost the same decay. For r², even with possible overestimation due to the small sample size for AHD (93 animals), IHD presented higher values than AHD at shorter distances, but with increasing distance both panels presented similar values. The r² measure is influenced by the minor allele frequency of the pair of SNPs, which can explain the difference observed between the r² decay and the |D’| decay. 
As a sum of the combinations between the Illumina and Affymetrix panels, the CP presented a decay equivalent to the mean of these combinations. The haplotype blocks detected for IHD, AHD, and CP numbered 84,529, 63,967, and 140,336, respectively. The IHD blocks had a mean length of 137.70 ± 219.05 kb, the AHD blocks 102.10 ± 155.47 kb, and the CP blocks 107.10 ± 169.14 kb. The majority of the haplotype blocks in these three panels were composed of fewer than 10 SNPs, with only 3,882 (IHD), 193 (AHD), and 8,462 (CP) haplotype blocks composed of 10 SNPs or more. There was an increase in the number of chromosomes covered with long haplotypes when CP was used, as well as an increase in haplotype coverage for short chromosomes (23-29), which can contribute to studies that explore haplotype blocks. In general, using the CP could be an alternative to increase density and the number of haplotype blocks, increasing the probability of obtaining a marker close to a quantitative trait locus of interest.
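The two LD measures named in the abstract, r² and |D'|, follow from basic haplotype-frequency arithmetic. The sketch below is a textbook illustration assuming phased 0/1 haplotype data at two biallelic loci; it is not PLINK's implementation.

```python
import numpy as np

def ld_measures(hap_a, hap_b):
    """Compute r^2 and |D'| for two loci from phased haplotypes.
    hap_a, hap_b: 0/1 arrays, one entry per haplotype."""
    p_a = hap_a.mean()               # allele-1 frequency at locus A
    p_b = hap_b.mean()               # allele-1 frequency at locus B
    p_ab = (hap_a * hap_b).mean()    # frequency of the 1-1 haplotype
    d = p_ab - p_a * p_b             # disequilibrium coefficient D
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    # |D'| rescales D by its maximum attainable value given the
    # allele frequencies, so it is less frequency-sensitive than r^2
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return r2, abs(d) / d_max
```

The allele-frequency terms in the r² denominator are exactly why the study observes r² and |D'| decaying differently across the panels.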

Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism

Procedia PDF Downloads 257
552 Care at the Intersection of Biomedicine and Traditional Chinese Medicine: Narratives of Integration, Negotiation, and Provision

Authors: Jessica Ding

Abstract:

The field of global health is currently advocating for a resurgence in the use of traditional medicines to improve people-centered care. Healthcare policies are rapidly changing in response; in China, the increasing presence of TCM in the same spaces as biomedicine has led to a new term: integrative medicine. However, the existence of TCM as a part of integrative medicine creates a pressing paradoxical tension where TCM is both seen as a marginalized system within ‘modern’ hospitals and as a modality worth integrating. Additionally, the impact of such shifts has not been fully explored: the World Health Organization for one focuses only on three angles —practices, products, and practitioners— with regards to traditional medicines. Through ten weeks of fieldwork conducted at an urban hospital in Shanghai, China, this research expands the perspective of existing strategies by looking at integrative care through a fourth lens: patients and families. The understanding of self-care, health-seeking behavior, and non-professional caregiving structures are critical to grasping the significance of traditional medicine for people-centered care. Indeed, those individual and informal health care expectations align with the very spaces and needs that traditional medicine has filled before such ideas of integration. It specifically looks at this issue via three processes that operationalize experiences of care: (1) how aspects of TCM are valued within integrative medicine, (2) how negotiations of care occur between patients and doctors, and (3) how 'good quality' caregiving presents in integrative clinical spaces. This research hopes to lend insight into how culturally embedded traditions, bureaucratic and institutional rationalities, and social patterns of health-seeking behavior influence care to shape illness experiences at the intersection of two medical modalities. 
This analysis of patients’ clinical and illness experiences serves to enrich narratives of integrative medical care’s ability to provide patient-centered care and to determine how international policies are realized at the individual level. This anthropological study of the integration of traditional Chinese medicine in local contexts can reveal the extent to which global strategies, as promoted by the WHO and the Chinese government, actually align with the expectations and perspectives of patients receiving care. Ultimately, this ethnographic analysis of a local Chinese context hopes to inform global policies regarding the future use and integration of traditional medicines.

Keywords: emergent systems, global health, integrative medicine, traditional Chinese medicine, TCM

Procedia PDF Downloads 123
551 A Geographical Spatial Analysis on the Benefits of Using Wind Energy in Kuwait

Authors: Obaid AlOtaibi, Salman Hussain

Abstract:

Wind energy is associated with many geographical factors, including wind speed, climate change, surface topography, and environmental impacts, as well as several economic factors, most notably the advancement of wind technology and energy prices. It is the fastest-growing and least expensive method of generating electricity. Wind energy generation is directly related to the characteristics of spatial wind. Therefore, the feasibility study for a wind energy conversion system is based on the value of the energy obtained relative to the initial investment and the cost of operation and maintenance. In Kuwait, wind energy is an appropriate choice as a source of energy generation. It can be used for groundwater extraction in agricultural areas such as Al-Abdali in the north and Al-Wafra in the south, in fresh and brackish groundwater fields, or in remote and isolated locations such as border areas and projects far from conventional electricity services, to take advantage of alternative energy, reduce pollutants, and reduce energy production costs. The study covers the State of Kuwait with the exception of the metropolitan area. Climatic data were obtained from the readings of eight distributed monitoring stations affiliated with the Kuwait Institute for Scientific Research (KISR). The data were used to assess the daily, monthly, quarterly, and annual wind energy available for utilization. The researchers applied a suitability model to the analysis using the ArcGIS program. It is a spatial analysis model that compares multiple locations based on grading weights to choose the most suitable one. The study criteria are: average annual wind speed, land use, topography, distance from main road networks, and distance from urban areas. According to these criteria, four proposed locations for establishing wind farm projects were selected based on the weights of the degree of suitability (excellent, good, average, and poor). 
The area representing the most suitable locations, with an excellent rank (4), covers 8% of Kuwait’s territory, distributed as follows: Al-Shqaya, Al-Dabdeba, and Al-Salmi (5.22%), Al-Abdali (1.22%), Umm al-Hayman (0.70%), and North Wafra and Al-Shaqeeq (0.86%). The study recommends that decision-makers consider the proposed location No. 1 (Al-Shqaya, Al-Dabdaba, and Al-Salmi) as the most suitable location for future development of wind farms in Kuwait, as it is also economically feasible.
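The grading-weight logic of the suitability model can be sketched outside ArcGIS. The weights and criterion names below are illustrative placeholders, since the abstract does not publish its actual weighting scheme.

```python
def suitability_rank(scores, weights):
    """scores: criterion -> rank 1-4 (poor..excellent) for one site;
    weights: criterion -> relative weight (should sum to 1).
    Returns the weighted score mapped back onto the 4-level scale."""
    total = sum(weights[c] * scores[c] for c in weights)
    if total >= 3.5:
        return "excellent"
    if total >= 2.5:
        return "good"
    if total >= 1.5:
        return "average"
    return "poor"

# Hypothetical equal weighting of the five study criteria:
CRITERIA_WEIGHTS = {
    "wind_speed": 0.2, "land_use": 0.2, "topography": 0.2,
    "road_distance": 0.2, "urban_distance": 0.2,
}
```

In the study itself this scoring is performed per raster cell over the whole country, which is how the 8% excellent-rank area estimate is obtained.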

Keywords: Kuwait, renewable energy, spatial analysis, wind energy

Procedia PDF Downloads 126
550 Acceptability Process of a Congestion Charge

Authors: Amira Mabrouk

Abstract:

This paper deals with the acceptability of urban tolls in Tunisia. Price-based regulation, i.e., the urban toll, is the outcome of a political process shaped by three-fold objectives: effectiveness, equity, and social acceptability. This produces economic interest groups and functions with incongruent preferences. The plausibility of this speculation goes hand in hand with the fact that these economic interest groups are also taxpayers, who undeniably perceive the urban toll as an additional charge. This wariness is coupled with inquiries about the conditions of usage and the redistribution of the collected tax revenue, and the idea of the leviathan state completes the picture. In a nutshell, although research related to road congestion proliferates, no de facto legitimacy can be pleaded. Nonetheless, the theory of urban tolls leads economists to question ways to reduce the negative external effects linked to congestion, and only then does the urban toll appear to offer an answer to these issues. Undeniably, the urban toll involves inherent conflicts, due both to the apparent no-payment principle of a public asset and to the social perception of the new measure as a mere additional charge. However, when the main concern is effectiveness in its broad sense and social well-being, the main factors that determine the acceptability of such a tariff measure, along with the type of incentives, should be the object of a thorough, in-depth analysis. Before adopting this economic tool, one has to recognize the factors that intervene in the acceptability of a congestion toll, a question that has brought about a copious number of articles and reports, most of which lack solid theoretical content. It is noticeable that uncertainties still float over the exact nature of the acceptability process. Accepting a congestion tariff could differ from one era to another, from one region to another, from one population to another, etc. 
Notably, this article, within a convenient time frame, attempts to bring into focus the link between the social acceptability of the urban congestion toll and the value of time, through a survey method barely employed in Tunisia: the stated preference method. How can the urban toll, as a tax, be defined, justified, and made acceptable? How can an equitable and effective congestion toll tariff be reached? How can the costs of this urban toll be covered? In what way can we make the redistribution of the urban toll revenue visible and economically equitable? How can the redistribution of the revenue compensate the disadvantaged when introducing such a tariff measure? This paper offers answers to these research questions, following the line of contribution of Jules Dupuit in 1844.
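In stated preference studies of this kind, the value of time typically falls out of a discrete choice model as the ratio of the time and cost coefficients. The sketch below is a generic binary logit illustration with made-up coefficient values, not the paper's estimated model.

```python
import math

def choice_prob(beta_cost, beta_time, toll_cost, time_diff):
    """Binary logit probability of choosing the tolled alternative,
    with the utility of the untolled base normalized to zero.
    time_diff is the travel-time difference (negative if the tolled
    route is faster); both betas are typically negative."""
    utility = beta_cost * toll_cost + beta_time * time_diff
    return 1.0 / (1.0 + math.exp(-utility))

def value_of_time(beta_cost, beta_time):
    """Marginal rate of substitution between time and money: how much
    a traveler is willing to pay to save one unit of travel time."""
    return beta_time / beta_cost
```

For instance, with illustrative coefficients beta_cost = -0.1 per dinar and beta_time = -2.0 per hour, the implied value of time is 20 dinars per hour; estimates of this ratio are what link the survey responses to the acceptability question.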

Keywords: congestion charge, social perception, acceptability, stated preferences

Procedia PDF Downloads 261
549 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc Sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population, and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, the relevant formulae were input into two spreadsheets to automate the calculations. Further checks were built into the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validation was conducted using five data sets of manually computed assay results, and the acceptance criteria set in the protocol were met. Significant p-values (p < 0.05, α = 0.05, at a 95% confidence interval) were obtained from Student's t-test comparisons of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets.
Human errors in calculations were minimized when these procedures were automated in quality control laboratories. The assay procedure for the formulation was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
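The two calculation steps automated in the spreadsheets can be sketched in code. This is a minimal illustration of the generic EDTA standardization and complexometric assay relations only, not the validated spreadsheet logic itself; the molar mass is a standard value, but the sample masses, titre volumes, tablet count, and label claim below are hypothetical.

```python
# Illustrative titrimetric calculations for a ZnSO4 tablet assay.
# The formulas are the generic 1:1 Zn-EDTA standardization and assay
# relations; all numeric inputs below are hypothetical examples.

ZN_MOLAR_MASS = 65.38  # g/mol, elemental zinc

def edta_molarity(zn_mass_g, titre_ml):
    """Standardize EDTA against a weighed zinc standard.
    EDTA complexes Zn2+ 1:1, so mol EDTA = mol Zn."""
    moles_zn = zn_mass_g / ZN_MOLAR_MASS
    return moles_zn / (titre_ml / 1000.0)

def zinc_per_tablet_mg(titre_ml, edta_m, tablets_in_sample):
    """Elemental zinc per tablet from the assay titration (1:1 complex)."""
    moles_zn = edta_m * titre_ml / 1000.0
    return moles_zn * ZN_MOLAR_MASS * 1000.0 / tablets_in_sample

def percent_label_claim(found_mg, label_mg):
    return 100.0 * found_mg / label_mg

# Hypothetical run: 0.1635 g zinc standard consumed 25.0 mL of EDTA,
# then a 5-tablet composite consumed 15.3 mL against a 20 mg label claim.
m = edta_molarity(0.1635, 25.0)
found = zinc_per_tablet_mg(15.3, m, 5)
print(round(m, 4), round(found, 1), round(percent_label_claim(found, 20.0), 1))
```

Encoding each relation once as a function, with replicate checks layered on top, is the same idea the validated spreadsheets implement: the formula is entered and verified a single time rather than retyped per analysis.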

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 153
548 Evaluating Multiple Diagnostic Tests: An Application to Cervical Intraepithelial Neoplasia

Authors: Areti Angeliki Veroniki, Sofia Tsokani, Evangelos Paraskevaidis, Dimitris Mavridis

Abstract:

The plethora of diagnostic test accuracy (DTA) studies has led to the increased use of systematic reviews and meta-analysis of DTA studies. Clinicians and healthcare professionals often consult DTA meta-analyses to make informed decisions regarding the optimum test to choose and use for a given setting. For example, the human papilloma virus (HPV) DNA, mRNA, and cytology can be used for the cervical intraepithelial neoplasia grade 2+ (CIN2+) diagnosis. But which test is the most accurate? Studies directly comparing test accuracy are not always available, and comparisons between multiple tests create a network of DTA studies that can be synthesized through a network meta-analysis of diagnostic tests (DTA-NMA). The aim is to summarize the DTA-NMA methods for at least three index tests presented in the methodological literature. We illustrate the application of the methods using a real data set for the comparative accuracy of HPV DNA, HPV mRNA, and cytology tests for cervical cancer. A search was conducted in PubMed, Web of Science, and Scopus from inception until the end of July 2019 to identify full-text research articles that describe a DTA-NMA method for three or more index tests. Since the joint classification of the results from one index against the results of another index test amongst those with the target condition and amongst those without the target condition are rarely reported in DTA studies, only methods requiring the 2x2 tables of the results of each index test against the reference standard were included. Studies of any design published in English were eligible for inclusion. Relevant unpublished material was also included. Ten relevant studies were finally included to evaluate their methodology. DTA-NMA methods that have been presented in the literature together with their advantages and disadvantages are described. 
In addition, using 37 studies for cervical cancer obtained from a published Cochrane review as a case study, an application of the identified DTA-NMA methods to determine the most promising test (in terms of sensitivity and specificity) for use as the best screening test to detect CIN2+ is presented. In conclusion, different approaches to the comparative DTA meta-analysis of multiple tests may lead to different results and hence may influence decision-making. Acknowledgment: This research is co-financed by Greece and the European Union (European Social Fund, ESF) through the Operational Programme «Human Resources Development, Education and Lifelong Learning 2014-2020» in the context of the project "Extension of Network Meta-Analysis for the Comparison of Diagnostic Tests" (MIS 5047640).
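The 2×2-table input that the eligible DTA-NMA methods require can be illustrated with a short sketch. This computes only the per-study sensitivity and specificity that a network meta-analysis would then pool across studies; the counts below are hypothetical, not the Cochrane review data.

```python
# Per-test accuracy from 2x2 tables (index test vs. reference standard),
# the input format the included DTA-NMA methods require. All counts are
# hypothetical illustrations, not data from the case study.

def accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from one 2x2 table."""
    return tp / (tp + fn), tn / (tn + fp)

tables = {                      # test name: (TP, FP, FN, TN)
    "HPV DNA":  (90, 30, 10, 170),
    "HPV mRNA": (85, 18, 15, 182),
    "cytology": (70,  8, 30, 192),
}
for name, counts in tables.items():
    sens, spec = accuracy(*counts)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

A DTA-NMA then jointly models these paired quantities across the network of studies, respecting the within-study correlation between sensitivity and specificity, which is why only the 2×2 tables against the reference standard (and not test-versus-test cross-classifications) are needed.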

Keywords: colposcopy, diagnostic test, HPV, network meta-analysis

Procedia PDF Downloads 120
547 Land Art in Public Spaces Design: Remediation, Prevention of Environmental Risks and Recycling as a Consequence of the Avant-Garde Activity of Landscape Architecture

Authors: Karolina Porada

Abstract:

Over the last 40 years, a trend has emerged in landscape architecture whose supporters do not perceive pro-ecological or postmodern solutions as the essential goal in the design of public green spaces, shifting their attention instead to the 'sculptural' shaping of sites with slopes, hills, embankments, and other landforms. This group of designers can be considered an avant-garde that refers, in its activities, to land art. Initial research shows that such applications are particularly frequent on former post-industrial sites and landfills, where materials such as debris and post-mining waste are used in their construction. Given the high degradation of the environment surrounding modern man, brownfields are both a challenge and a field of interest for the avant-garde of landscape architecture, who through their projects try to recover lost land by means of transformations supported by engineering and ecological knowledge, creating places where nature can develop again. The analysis of a dozen or so sites leads to an important conclusion: apart from the cultural aspects (including artistic activity), green areas formally referring to land art play an important role in the remediation of post-industrial sites and in waste recycling (e.g. from construction sites). These processes also offer potential for applying the concept of Nature-Based Solutions, i.e. solutions that allow the natural development of a site in such a way as to cope with environmental problems such as air pollution, soil contamination (through phytoremediation), and climate change. The paper presents examples of modern parks whose compositions are based on shaping the terrain in a manner referring to land art, while at the same time providing examples of brownfield reuse and waste recycling.
For the purposes of the site analyses, research methods such as historical-interpretative studies, case studies, qualitative research, and the method of logical argumentation were used. The results provide information about the role landscape architecture can play in the remediation of degraded areas, while guaranteeing benefits such as visually attractive landscapes, low implementation costs, and an improved quality of the natural environment.

Keywords: brownfields, contemporary parks, landscape architecture, remediation

Procedia PDF Downloads 133
546 Application of Nuclear Magnetic Resonance (1H-NMR) in the Analysis of Catalytic Aquathermolysis: Colombian Heavy Oil Case

Authors: Paola Leon, Hugo Garcia, Adan Leon, Samuel Munoz

Abstract:

Enhanced oil recovery by steam injection was long considered a process that generated only physical recovery mechanisms. However, there is evidence of a series of chemical reactions, called aquathermolysis, which generate hydrogen sulfide, carbon dioxide, methane, and hydrocarbons of lower molecular weight. These reactions can be favored by the addition of a catalyst during steam injection; in this way, it is possible to achieve in situ upgrading of the original oil through the increased production of molecules of lower molecular weight. This additional effect could increase the oil recovery factor and reduce costs in the transport and refining stages. This research has therefore focused on the experimental evaluation of catalytic aquathermolysis on a Colombian heavy oil of 12.8°API. The effects of three different catalysts, reaction time, and temperature were evaluated in a batch microreactor. The changes in the Colombian heavy oil were quantified through proton nuclear magnetic resonance (1H-NMR). The interpretation of relaxation times and absorption intensities allowed the distribution of functional groups in the base oil and the upgraded oils to be identified. Additionally, the average number of aliphatic carbons in alkyl chains, the number of substituted rings, and the aromaticity factor were established as average structural parameters in order to simplify the compositional analysis of the samples. The first experimental stage proved that each catalyst develops a different reaction mechanism. The aromaticity factor follows the order Mo > Fe > Ni among the salts used. However, the upgraded oil obtained with iron naphthenate tends to form a higher content of mono-aromatic and a lower content of poly-aromatic compounds. On the other hand, the results of the second phase of experiments suggest that the upgraded oils show a smaller difference in alkyl chain length in the range of 240 to 270 °C.
This parameter has lower values at 300 °C, which indicates that alkylation or cleavage reactions of the alkyl chains govern at higher reaction temperatures. The presence of condensation reactions is supported by the behavior of the aromaticity factor and by the production of bridge carbons between aromatic rings (RCH₂). Finally, a greater dispersion is observed in the aliphatic hydrogens, which indicates that the alkyl chains are more reactive than the aromatic structures.
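The aromaticity factor used as a structural parameter above can be estimated from 1H-NMR integral regions. The sketch below follows the general Brown-Ladner-style relation rather than the authors' exact procedure, and the C/H ratio, hydrogen fractions, and divisors are illustrative assumptions only.

```python
# Aromaticity factor (fa = aromatic C / total C) from 1H-NMR hydrogen
# fractions, in the spirit of the Brown-Ladner average-structure relations.
# The C/H ratio, hydrogen fractions, and the assumed H-per-C divisors
# (x, y) below are illustrative assumptions, not the paper's data.

def aromaticity_factor(c_to_h, h_alpha_frac, h_other_frac, x=2.0, y=2.0):
    """Estimate fa assuming aliphatic carbon ~ H_alpha/x + H_other/y
    (per total hydrogen), so fa = (C/H - C_aliphatic) / (C/H)."""
    c_aliphatic = h_alpha_frac / x + h_other_frac / y
    return (c_to_h - c_aliphatic) / c_to_h

# Hypothetical heavy-oil values: atomic C/H ratio of 0.60, with 20% of
# the hydrogen alpha to aromatic rings and 72% in other aliphatic sites.
fa = aromaticity_factor(0.60, 0.20, 0.72)
print(round(fa, 3))
```

Comparing fa computed this way for the base oil and each upgraded oil is what allows ranking the catalysts (Mo > Fe > Ni) on a single structural scale.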

Keywords: catalyst, upgrading, aquathermolysis, steam

Procedia PDF Downloads 89
545 Countering the Bullwhip Effect by Absorbing It Downstream in the Supply Chain

Authors: Geng Cui, Naoto Imura, Katsuhiro Nishinari, Takahiro Ezaki

Abstract:

The bullwhip effect, which refers to the amplification of demand variance as one moves up the supply chain, has been observed in various industries and extensively studied through analytic approaches. Existing methods to mitigate the bullwhip effect, such as decentralized demand information, vendor-managed inventory, and the Collaborative Planning, Forecasting, and Replenishment System, rely on the willingness and ability of supply chain participants to share their information. However, in practice, information sharing is often difficult to realize due to privacy concerns. The purpose of this study is to explore new ways to mitigate the bullwhip effect without the need for information sharing. This paper proposes a 'bullwhip absorption strategy' (BAS) to alleviate the bullwhip effect by absorbing it downstream in the supply chain. To achieve this, a two-stage supply chain system was employed, consisting of a single retailer and a single manufacturer. In each time period, the retailer receives an order generated according to an autoregressive process. Upon receiving the order, the retailer depletes the ordered amount, forecasts future demand based on past records, and places an order with the manufacturer using the order-up-to replenishment policy. The manufacturer follows a similar process. In essence, the mechanism of the model is similar to that of the beer game. The BAS is implemented at the retailer's level to counteract the bullwhip effect. This strategy requires the retailer to reduce the uncertainty in its orders, thereby absorbing the bullwhip effect downstream in the supply chain. The advantage of the BAS is that upstream participants can benefit from a reduced bullwhip effect. Although the retailer may incur additional costs, if the gain in the upstream segment can compensate for the retailer's loss, the entire supply chain will be better off. 
Two indicators, order variance and inventory variance, were used to quantify the bullwhip effect in relation to the strength of absorption. It was found that implementing the BAS at the retailer's level results in a reduction in both the retailer's and the manufacturer's order variances. However, when examining the impact on inventory variances, a trade-off relationship was observed. The manufacturer's inventory variance monotonically decreases with an increase in absorption strength, while the retailer's inventory variance does not always decrease as the absorption strength grows. This is especially true when the autoregression coefficient has a high value, causing the retailer's inventory variance to become a monotonically increasing function of the absorption strength. Finally, numerical simulations were conducted for verification, and the results were consistent with our theoretical analysis.
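The amplification and its absorption can be illustrated with a minimal simulation of the retailer stage. This is a schematic sketch, not the paper's exact model: demand follows an AR(1) process, the retailer forecasts by exponential smoothing under an order-up-to policy, and the absorption strength is modeled here simply as order smoothing; all parameter values are illustrative assumptions.

```python
import numpy as np

# Retailer stage of a two-stage chain: AR(1) customer demand, exponential-
# smoothing forecast, order-up-to replenishment, and an "absorption" knob
# implemented as order smoothing. All parameters are illustrative.

def simulate(absorption=0.0, n=20_000, mu=100.0, rho=0.7, sigma=10.0,
             lead_time=2, alpha=0.3, seed=0):
    rng = np.random.default_rng(seed)
    demand = np.empty(n)
    orders = np.empty(n)
    d = forecast = order = mu
    prev_level = (lead_time + 1) * mu
    for t in range(n):
        d = mu + rho * (d - mu) + rng.normal(0.0, sigma)   # AR(1) demand
        demand[t] = d
        forecast = alpha * d + (1 - alpha) * forecast
        level = (lead_time + 1) * forecast                 # order-up-to level
        raw_order = d + level - prev_level
        prev_level = level
        # absorption in [0, 1): 0 reproduces the plain order-up-to policy
        order = absorption * order + (1 - absorption) * raw_order
        orders[t] = order
    return demand.var(), orders.var()

var_d, var_q = simulate(absorption=0.0)
_, var_q_abs = simulate(absorption=0.6)
print(f"bullwhip ratio (no absorption): {var_q / var_d:.2f}")
print(f"bullwhip ratio (absorption=0.6): {var_q_abs / var_d:.2f}")
```

With zero absorption the order variance exceeds the demand variance (the bullwhip effect); raising the absorption strength damps the order stream the manufacturer sees, at the cost of the retailer's inventory tracking the orders less closely, which is the trade-off the study quantifies.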

Keywords: bullwhip effect, supply chain management, inventory management, demand forecasting, order-up-to policy

Procedia PDF Downloads 50
544 Feasibility of Small Autonomous Solar-Powered Water Desalination Units for Arid Regions

Authors: Mohamed Ahmed M. Azab

Abstract:

The shortage of fresh water is a major problem in several areas of the world, such as arid regions and the coastal zones of several Arabian Gulf countries. Fortunately, arid regions are exposed to high levels of solar irradiation for most of the year, which makes the utilization of solar energy a promising solution to this problem with zero harmful emissions (a green system). The main objective of this work is to conduct a feasibility study of small autonomous water desalination units powered by photovoltaic modules, as a green renewable energy resource to be employed in isolated zones as a source of drinking water for scattered communities where the installation of large desalination stations is ruled out owing to the unavailability of an electric grid. Yanbu City is chosen as a case study because its Renewable Energy Center is equipped with the sensors needed to assess the availability of solar energy throughout the year. The study covered two types of feed water: brackish well water and seawater from coastal regions. In the case of well water, two versions of the desalination unit were considered: the first operates during the day only, while the second also operates at night, which requires an energy storage system (batteries) to provide the necessary electric power. According to the results of the feasibility study, the utilization of small autonomous desalination units is applicable and economically acceptable in the case of brackish well water. In the case of seawater, however, the capital costs are extremely high, and the cost of desalinated water will not be economically feasible unless governmental subsidies are provided.
In addition, the study indicated that, for the same water production, the energy storage (day-night) version adds capital cost for the batteries and extra running cost for their replacement, which makes its unit water price uncompetitive not only with the day-only unit but also with conventional units powered by diesel generators (fossil fuel), owing to the low fuel prices in the Kingdom. However, the cost analysis shows that the price per cubic meter of water produced by the day-night unit approaches that of the day-only unit provided that the day-night unit operates for a period roughly 50% longer.
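The kind of cost comparison described above can be sketched as a levelized cost of water (LCOW) calculation: annualized capital plus operating and battery-replacement costs divided by annual water production. Every numeric input below is a hypothetical placeholder, not a figure from the feasibility study.

```python
# Levelized cost of water (LCOW): a standard way to compare the day-only
# and day-night (battery) unit options. All numbers are hypothetical
# placeholders, not figures from the feasibility study.

def crf(rate, years):
    """Capital recovery factor: annualizes an upfront cost."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcow(capex, annual_om, annual_m3, rate=0.06, years=20,
         battery_capex=0.0, battery_life=5):
    annual_capital = capex * crf(rate, years)
    # Batteries are replaced several times within the project lifetime,
    # so their cost is annualized over their own (shorter) life.
    annual_battery = battery_capex * crf(rate, battery_life)
    return (annual_capital + annual_om + annual_battery) / annual_m3

day_only = lcow(capex=40_000, annual_om=2_000, annual_m3=3_000)
# Day-night: same core unit, extra battery cost, but ~50% more water.
day_night = lcow(capex=40_000, annual_om=2_500, annual_m3=4_500,
                 battery_capex=12_000)
print(f"day-only:  {day_only:.2f} $/m3")
print(f"day-night: {day_night:.2f} $/m3")
```

The structure of the formula shows why the study's conclusion holds: the battery term raises the numerator, so the day-night unit only matches the day-only price if the longer operating period raises the denominator enough to compensate.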

Keywords: solar energy, water desalination, reverse osmosis, arid regions

Procedia PDF Downloads 426
543 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspection to advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and therefore demand precise identification. The narrative then turns to the technological evolution of defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electrical tests to more sophisticated techniques, including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advance towards more adaptive, accurate, and rapid defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs of advanced imaging technologies, and the demand for processing speeds that match mass-production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocity.
Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 17
542 Evaluation of Dry Matter Yield of Panicum maximum Intercropped with Pigeonpea and Sesbania Sesban

Authors: Misheck Musokwa, Paramu Mafongoya, Simon Lorentz

Abstract:

Seasonal shortage of fodder during the dry season is a major constraint for smallholder livestock farmers in South Africa. To mitigate the shortage, legume trees can be intercropped with pastures, which diversifies the sources of feed and increases the amount of protein available to grazing animals. The objective was to evaluate the dry matter yield of Panicum maximum and land productivity under different fodder production systems during the 2016/17-2017/18 seasons at Empangeni (28.6391° S, 31.9400° E). A randomized complete block design, replicated three times, was used; the treatments were sole P. maximum, P. maximum + Sesbania sesban, P. maximum + pigeonpea, sole S. sesban, and sole pigeonpea. Three-month-old S. sesban seedlings were transplanted, while pigeonpea was direct-seeded, both at a spacing of 1 m × 1 m. P. maximum seed was drilled at a rate of 7.5 kg ha⁻¹ with an inter-row spacing of 0.25 m, including between the tree rows. Harvests were taken at six-month intervals. A 0.25 m² quadrat randomly placed at three points per plot was used as the sampling area when harvesting P. maximum. There were significant differences (P < 0.05) across the three harvests and in total dry matter. Sole P. maximum had a higher dry matter yield than both intercrops at the first harvest and in total, while the second and third harvests did not differ significantly from the pigeonpea intercrop. The order for the three harvests was: P. maximum (541.2c, 1209.3b and 1557b kg ha⁻¹) ≥ P. maximum + pigeonpea (157.2b, 926.7b and 1129b kg ha⁻¹) > P. maximum + S. sesban (36.3a, 282a and 555a kg ha⁻¹). Total accumulated dry matter yield: P. maximum (3307c kg ha⁻¹) > P. maximum + pigeonpea (2212 kg ha⁻¹) ≥ P. maximum + S. sesban (874 kg ha⁻¹). There was also a significant difference (P < 0.05) in tree seed yield: pigeonpea (1240.3 kg ha⁻¹) ≥ pigeonpea + P. maximum (862.7 kg ha⁻¹) > S. sesban (391.9 kg ha⁻¹) ≥ S. sesban + P. maximum.
The Land Equivalent Ratio (LER) was in the following order: P. maximum + pigeonpea (1.37) > P. maximum + S. sesban (0.84) > pigeonpea (0.59) ≥ S. sesban (0.57) > P. maximum (0.26). The results indicate that intercropping P. maximum with pigeonpea is beneficial because of its higher land productivity. Planting grass with pigeonpea was more beneficial than S. sesban with grass or than sole cropping in terms of easing the shortage of arable land: P. maximum + pigeonpea saves a substantial share of land (37%), which can subsequently be used for other crop production. Pigeonpea is recommended as an intercrop with P. maximum due to its higher LER and the combined production of livestock feed, human food, and firewood. Because Panicum grass is low in crude protein though high in carbohydrates, there is a need to intercrop it with legume trees. A farmer who buys concentrates can reduce costs by combining P. maximum with pigeonpea, as this provides a balanced diet at low cost.
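The Land Equivalent Ratio reported above is the sum, over the component crops, of each intercrop yield divided by the corresponding sole-crop yield. A short sketch using the grass dry matter and pigeonpea seed yields reported in this abstract reproduces the grass + pigeonpea value up to rounding of the inputs:

```python
# Land Equivalent Ratio: LER = sum over crops of (intercrop yield /
# sole-crop yield). LER > 1 means the intercrop needs less land than
# sole crops to obtain the same output.

def ler(pairs):
    """pairs: iterable of (intercrop_yield, sole_yield) per component crop."""
    return sum(inter / sole for inter, sole in pairs)

# Yields from this study: P. maximum total dry matter (kg/ha) in the
# mixture vs. sole, and pigeonpea seed (kg/ha) in the mixture vs. sole.
grass_pigeonpea = ler([(2212, 3307),
                       (862.7, 1240.3)])
print(round(grass_pigeonpea, 2))  # ~1.36, the reported 1.37 up to rounding
```

An LER of about 1.37 means sole crops would need roughly 37% more land to match the intercrop's combined output, which is the land saving the conclusion refers to.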

Keywords: fodder, livestock, productivity, smallholder farmers

Procedia PDF Downloads 129