Search results for: healthcare management
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10481

6281 Short Teaching Sessions for Emergency Front of Neck Access

Authors: S. M. C. Kelly, A. Hargreaves, S. Hargreaves

Abstract:

Introduction: The 'can't intubate, can't ventilate' emergency scenario is one that has been shown to be managed badly in the past. Reasons identified included gaps in knowledge of the procedure and of the emergency equipment used. We aimed to show an increase in confidence amongst anaesthetists and operating department practitioners in the technique following a short tea-trolley-style teaching intervention. Methods: We carried out the teaching on a one-to-one basis. Two anaesthetists visited each operating theatre during normal working days: one carried out the teaching session and one took over the intra-operative care of the patient, releasing the listed anaesthetist for a short teaching session. The teaching was delivered to a mixture of students and healthcare professionals, both anaesthetists and anaesthetic practitioners. The equipment included a trolley, an airway manikin, a size 10 scalpel, a bougie and a size 6.0 tracheal tube. The educator discussed the equipment, performed a demonstration and observed the participants performing the procedure. We asked each person to fill out a pre- and post-teaching questionnaire stating their confidence with the procedure. Results: The teaching was delivered to 63 participants in total, comprising 21 consultant anaesthetists, 23 trainee doctors and 19 anaesthetic practitioners. The teaching sessions lasted on average 9 minutes (range 5–15 minutes). All participants reported an increase in confidence in both the equipment and the technique for front of neck access. Anaesthetic practitioners reported the greatest increase in confidence (53%), with trainee anaesthetists reporting a 27% increase and consultant anaesthetists 22%. Overall, confidence in the performance of emergency front of neck access increased by 31% after the teaching session. Discussion: Short 'tea trolley style' teaching improves confidence in the equipment and technique used for emergency front of neck access. This is true for students as well as for consultant anaesthetists. This teaching style is quick, has minimal running costs and is relevant to all anaesthetic departments.

Keywords: airway teaching, can't intubate can't ventilate, cricothyroidotomy, front-of-neck

Procedia PDF Downloads 133
6280 A Real-World Evidence Analysis of Associations between Costs, Quality of Life and Disease-Severity Indicators of Alzheimer’s Disease in Thailand

Authors: Khachen Kongpakwattana, Charungthai Dejthevaporn, Orapitchaya Krairit, Piyameth Dilokthornsakul, Devi Mohan, Nathorn Chaiyakunapruk

Abstract:

Background: Although an increase in the burden of Alzheimer's disease (AD) is evident worldwide, knowledge of the costs and health-related quality of life (HR-QoL) associated with AD in Low- and Middle-Income Countries (LMICs) is still lacking. We therefore aimed to collect real-world cost and HR-QoL data and investigate their associations with multiple disease-severity indicators among AD patients in Thailand. Methods: We recruited AD patients aged ≥ 60 years, accompanied by their caregivers, at a university-affiliated tertiary hospital. A one-time structured interview was conducted to collect disease-severity indicators, HR-QoL and caregiving information using standardized tools. The hospital's database was used to retrieve healthcare resource utilization that occurred over the 6 months preceding the interview date. Costs were annualized and stratified by cognitive status. Generalized linear models were employed to evaluate determinants of costs and HR-QoL. Results: Among 148 community-dwelling patients, the average annual total societal cost of AD care was 8,014 US$ [95% Confidence Interval (95% CI): 7,295 US$ - 8,844 US$] per patient. Total costs for patients in the severe stage (9,860 US$; 95% CI: 8,785 US$ - 11,328 US$) were almost twice as high as those for the mild stage (5,524 US$; 95% CI: 4,649 US$ - 6,593 US$). The major cost driver was direct medical costs, particularly those incurred by AD prescriptions. Functional status was the strongest determinant of both total costs and patients' HR-QoL (p-value < 0.001). Conclusions: Our real-world findings point to a distinct major cost driver, expensive AD treatment, emphasizing the demand for country-specific cost evidence. Increases in cognitive and functional status are significantly associated with decreases in the total costs of AD care and with improvement in patients' HR-QoL.
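The generalized linear model step can be illustrated with a minimal sketch (the column names and the Gamma family with log link are assumptions for illustration; the paper does not report its exact model specification):

```python
# Minimal sketch of a GLM for annualized cost data, assuming hypothetical column names.
# A Gamma family with a log link is a common choice for right-skewed cost outcomes.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("ad_cost_survey.csv")  # one row per patient: costs, HR-QoL, severity indicators

cost_model = smf.glm(
    "total_cost ~ mmse_score + adl_score + age + caregiver_hours",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()
print(cost_model.summary())  # functional status (e.g. ADL) reported as the strongest determinant
```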

Keywords: Alzheimer's disease, associations, costs, disease-severity indicators, health-related quality of life

Procedia PDF Downloads 124
6279 Urban Design as a Tool in Disaster Resilience and Urban Hazard Mitigation: Case of Cochin, Kerala, India

Authors: Vinu Elias Jacob, Manoj Kumar Kini

Abstract:

Disasters of all types are occurring more frequently and are becoming more costly than ever due to various man-made factors, including climate change. A better utilisation of the concepts of governance and management within disaster risk reduction is both inevitable and of utmost importance. There is a need to explore the role of pre- and post-disaster public policies. The role of urban planning and design in shaping the recovery opportunities of households, individuals and, collectively, settlements also has to be explored, as do governance strategies that can better support the integration of disaster risk reduction and management. The main aim is thereby to build the resilience of individuals and communities and thus of the state as well. Resilience is a term usually linked to the fields of disaster management and mitigation, but it has today become an integral part of the planning and design of cities. Disaster resilience broadly describes the ability of an individual or community to 'bounce back' from disaster impacts through improved mitigation, preparedness, response, and recovery. The world's growing population has intensified the inflow and use of resources, putting pressure on natural systems and creating inequity in the distribution of resources. This makes cities vulnerable to repeated shocks from both natural and man-made disasters. Each urban area needs detailed studies and study-based strategies to proceed in this direction. Cochin in Kerala is the fastest- and largest-growing city, with a population of more than 26 lakh (2.6 million). The main concern of this paper is making cities resilient by designing a framework of strategies, based on urban design principles, for an immediate response system, focusing especially on the city of Cochin, Kerala, India. The paper discusses the spatial transformations caused by disasters and the role of spatial planning in the context of significant disasters. The paper also aims to develop a model, taking into consideration factors such as land use, open spaces, transportation networks, physical and social infrastructure, building design, density, and ecology, that can be implemented in any city and any context. Using the tools of urban design, guidelines are proposed for the smooth evacuation of people through hassle-free transport networks, the protection of vulnerable areas in the city, the provision of adequate open spaces for shelters and gatherings, and the availability of basic amenities to affected populations within reachable distance. Strategies at the city level and neighbourhood level have been developed with inferences from vulnerability analysis and case studies.

Keywords: disaster management, resilience, spatial planning, spatial transformations

Procedia PDF Downloads 280
6278 A Literature Review and a Proposed Conceptual Framework for Learning Activities in Business Process Management

Authors: Carin Lindskog

Abstract:

Introduction: Long-term success requires an organizational balance between continuity (exploitation) and change (exploration). The problem of balancing exploitation and exploration is a common issue in studies of organizational learning. In order to better face tough competition in the face of change, organizations need to exploit their current business and explore new business fields by developing new capabilities. The purpose of this work in progress is to develop a conceptual framework to shed light on the relevance of 'learning activities', i.e., exploitation and exploration, at different levels. The research questions to be addressed are as follows: What sort of learning activities are found in the Business Process Management (BPM) field? How can these activities be linked to the individual, group, and organizational levels? In this work, a literature review will first be conducted to explore the status of learning activities in the BPM field. An outcome of the literature review will be a conceptual framework of learning activities based on the included publications. The learning activities will be categorized as exploitation, exploration, or both, and mapped onto the individual, group, and organizational levels. The proposed conceptual framework will be a valuable tool for analyzing the research field as well as for identifying future research directions. Related Work: BPM has increased in popularity as a way of working to strengthen the quality of work and meet demands for efficiency. Despite this rise in popularity, more and more organizations are reporting BPM failures. One reason for this is a lack of knowledge about extending the scope of BPM to other business contexts, including, for example, more creative business fields. Another reason for the failures is that employees are resistant to change. The learning process in an organization is an ongoing cycle of reflection and action and is a process that can be initiated, developed and practiced. Furthermore, organizational learning is multilevel; therefore, the theory of organizational learning needs to consider the individual, group, and organizational levels. Learning happens over time and across levels, but it also creates a tension between incorporating new learning (feed-forward) and exploiting or using what has already been learned (feedback). Through feed-forward processes, new ideas and actions move from the individual to the group to the organizational level. At the same time, what has already been learned feeds back from the organization to the group and to the individual, and has an impact on how people act and think.

Keywords: business process management, exploitation, exploration, learning activities

Procedia PDF Downloads 112
6277 Prevalence of Dengue in Sickle Cell Disease in Pre-school Children

Authors: Nikhil A. Gavhane, Sachin Shah, Ishant S. Mahajan, Pawan D. Bahekar

Abstract:

Introduction: Millions of people are affected by dengue fever every year, which drives up healthcare expenses in many low-income countries; organ failure and other serious complications may result. Another worldwide public health problem is sickle cell anaemia, which is most prevalent in Africa, the Caribbean, and Europe. Dengue epidemics have reportedly occurred in locations with a high frequency of sickle cell disease, compounding the health problems in these areas. Aims and Objectives: This study examines dengue infection in pre-school children affected by sickle cell disease. Method: This retrospective cohort study examined paediatric patients. Young people with sickle cell disease (SCD), young people with dengue infection, and a control group without SCD or dengue were studied. Data on demographics, SCD consequences, medical treatments, and laboratory findings were gathered to analyse the influence of SCD on dengue severity and clinical outcomes, classified as severe or non-severe according to the 2009 WHO classification. The duration of acute illness was estimated using fever onset or admission symptoms. Results: Table 1 compares the features of dengue episodes by haemoglobin genotype in SS, SC, and control patients. Table 2 shows that severe dengue cases are older, have longer admission delays, and present particular symptoms. The multivariate analysis in Table 3 indicates a strong association between the SS genotype and severe dengue, multiorgan failure, and acute pulmonary complications. Table 4 relates severe dengue to higher white blood cell counts and liver enzymes, anaemia, and reduced lactate dehydrogenase. Conclusion: This study is valuable but confined to hospitalised dengue patients with sickle cell disease, and its small cohorts limit comparisons. Further study is needed, since the findings contradict predictions.

Keywords: dengue, chills, headache, severe myalgia, vomiting, nausea, prostration

Procedia PDF Downloads 54
6276 Development of a Work-Related Stress Management Program Guaranteeing Fitness-For-Duty for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

Human error is one of the most dreaded factors that may result in unexpected accidents, especially in nuclear power plants. For accident prevention, it is indispensable to analyze and manage the influence of any factor which may raise the possibility of human error. Among many factors, stress has been reported to have a significant influence on human performance. Therefore, this research aimed to develop a work-related stress management program which can guarantee the Fitness-for-Duty (FFD) of workers in nuclear power plants, especially those working in main control rooms. Major stress factors were elicited through literature surveys and classified into major categories such as demands, support, and relationships. To manage those factors, a test and intervention program based on a 4-level approach was developed over the whole employment cycle, including the selection and screening of workers, job allocation, and job rotation. In addition, a managerial care program was introduced based on the concept of an Employee Assistance Program (EAP). Reviews of the program by former nuclear power plant operators were affirmative and suggested additional measures to guarantee high performance of human workers, not only in normal operations but also in emergency situations.

Keywords: human error, work performance, work stress, Fitness-For-Duty (FFD), Employee Assistance Program (EAP)

Procedia PDF Downloads 392
6275 Variability of Surface Air Temperature in Sri Lanka and Its Relation to El Nino Southern Oscillation and Indian Ocean Dipole

Authors: Athdath Waduge Susantha Janaka Kumara, Xiefei Zhi, Zin Mie Mie Sein

Abstract:

Understanding air temperature variability is crucially important for disaster risk reduction and management. In this study, we used data from 15 synoptic meteorological stations to assess the spatiotemporal variability of air temperature over Sri Lanka during 1972–2021. Empirical orthogonal function (EOF) analysis, principal component analysis (PCA), the Mann-Kendall test, power spectrum analysis and correlation coefficient analysis were used to investigate the long-term trends of air temperature and their possible relation to sea surface temperature (SST) over the region. The results indicate an increasing trend in air temperature, with an abrupt climate shift noted in 1994. The spatial distribution of EOF1 (63.5%) shows positive and negative loading dipole patterns from the south to the northeast, while EOF2 (23.4%) explains warmer (colder) conditions in parts of the central (southern and eastern) areas. The power spectra of PC1 (PC2) indicate a significant period of 3-4 years (quasi-2 years). Moreover, the Indian Ocean Dipole (IOD) shows a strong positive correlation with the air temperature of Sri Lanka, while the El Niño-Southern Oscillation (ENSO) presents a weak negative correlation. Therefore, IOD events tend to lead to higher temperatures in the region. This study's findings can help disaster risk reduction and management in the country.
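As an illustration of the EOF/PCA step, the sketch below applies PCA to a synthetic station-by-year temperature anomaly matrix (the real analysis uses the 15 Sri Lankan stations for 1972–2021; all numbers here are placeholders):

```python
# Illustrative EOF analysis of station air-temperature anomalies via PCA (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_years, n_stations = 50, 15
temps = 27 + 0.02 * np.arange(n_years)[:, None] + rng.normal(0, 0.3, (n_years, n_stations))

anomalies = temps - temps.mean(axis=0)        # remove each station's climatology
pca = PCA(n_components=2).fit(anomalies)

eofs = pca.components_                        # spatial loading patterns (EOF1, EOF2)
pcs = pca.transform(anomalies)                # principal component time series (PC1, PC2)
print(pca.explained_variance_ratio_)          # cf. the 63.5% and 23.4% reported in the study
```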

Keywords: air temperature, interannual variability, ENSO, IOD

Procedia PDF Downloads 83
6274 Pathogenic Candida Biofilm Producers Involved in Healthcare-Associated Infections

Authors: Ouassila Bekkal Brikci Benhabib, Zahia Boucherit Otmani, Kebir Boucherit, A. Seghir

Abstract:

The placement of intravenous catheters in hospitalized patients is a common act in many clinical situations. From the moment of their insertion into the body, these therapeutic tools represent gateways for germs, including fungi. The latter can generate the growth of biofilms, which can be the cause of fungal infection. Faced with this problem, we conducted a study in the neurosurgery unit of the University Hospital of Tlemcen, aiming to isolate and identify Candida yeasts from intravenous catheters and then test their ability to form biofilms. Materials and methods: 256 patients hospitalized in the surgery unit of this hospital in western Algeria were included in the study. All samples were taken from peripheral venous catheters implanted for 72 hours or more. A total of 31 Candida isolates were obtained. MIC and SMIC were determined at 80% inhibition using the XTT tetrazolium assay measured at 490 nm, with final antifungal concentrations ranging from 0.03 to 16 mg/mL for amphotericin B and from 0.015 to 8 mg/mL for caspofungin. Results: The 31 Candida isolates recovered from catheters included 14 Candida albicans and 17 non-albicans Candida. Of all the isolates, 21 strains were able to form biofilms. As planktonic cells, all isolates were 100% susceptible to the antifungal agents tested; however, in the biofilm state, many isolates became tolerant to the tested antifungals. Conclusion: Candida yeasts isolated from intravascular catheters are considered an important virulence factor in the pathogenesis of infections. Their involvement in catheter-related infections can be disastrous because of their potential to generate biofilms, which survive high concentrations of antifungals and lead to treatment failure. Pending the development of an anti-biofilm therapeutic approach for catheters, their control relies on the prevention of infection risk through the training and awareness of medical staff, strict hygiene and maximum asepsis, and the choice of materials that limit microbial colonization.

Keywords: candida, biofilm, hospital, infection, amphotericin B, caspofungin

Procedia PDF Downloads 308
6273 Effective Public Health Communication: Vaccine Health Messaging with Aboriginal and Torres Strait Islander Peoples

Authors: Maria Karidakis, Barbara Kelly

Abstract:

The challenges precipitated by the advent of COVID-19 have brought to the fore the task governments and key stakeholders are faced with: ensuring that public health communication is readily accessible to vulnerable populations. COVID-19 has presented challenges for the provision and reception of timely, accessible, and accurate health information pertaining to vaccine health messaging for Aboriginal and Torres Strait Islander peoples. The aim of this qualitative study was to explore strategies used by Aboriginal-led organisations to improve communication about COVID-19 and vaccination for their communities, and to explore how these mediation and outreach strategies were received by community members. We interviewed 6 Aboriginal-led organisations and 15 community members from several states across Australia, and these interviews were analysed thematically. The findings suggest that effective public health communication is enhanced when a First Nations-led response defines the governance that happens in First Nations communities. Pro-active and self-determining Aboriginal leadership and decision-making helps drive the response to counter a growing trend towards vaccine hesitancy. Other strategies include establishing partnerships with government departments and relevant non-governmental organisations to ensure services are implemented and culturally appropriate. The outcomes of this research will afford policymakers, stakeholders in healthcare, and cultural mediators the capacity to identify strengths and potential problems associated with pandemic health information and to subsequently implement creative and culturally specific solutions that go beyond the provision of written documentation via translation or interpreting. It will also enable governing bodies to adjust multilingual policies and to adopt mediation strategies that will improve information delivery and intercultural services on a national and international level.

Keywords: intercultural communication, qualitative, public health communication, COVID-19, pandemic, mediated communication, first nations people

Procedia PDF Downloads 144
6272 Application of Forward Contract and Crop Insurance as Risk Management Tools of Agriculture: A Case Study in Bangladesh

Authors: M. Bokhtiar Hasan, M. Delowar Hossain, Abu N. M. Wahid

Abstract:

The principal aim of the study is to find a way to effectively manage agricultural risks such as price volatility, weather risks, and fund shortages. To hedge price volatility, farmers sometimes make contracts with agro-traders but fail to protect themselves effectively because no legal framework exists for such contracts. The study extensively reviews the existing literature and finds that the majority of studies deal with either price volatility or weather risks. Addressing these risks through a single model would be more useful to both farmers and traders, and the key contribution of this study lies precisely in that attempt. Initially, we conduct a small survey aimed at identifying the shortcomings of existing contracts. We then propose a model combining forward and insurance contracts, in which the forward contract is used to hedge price volatility and the insurance contract is used to protect against weather risks. Contribution/Originality: The study adds to the existing literature by proposing an integrated model comprising a forward contract and crop insurance, which will support both farmers and traders in coping with agricultural risks such as price volatility, weather hazards, and fund shortages. JEL Classifications: O13, Q13
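A stylized payoff sketch of the combined instrument is given below; all prices, triggers, indemnity rates, and the premium are hypothetical and only illustrate how the forward leg fixes the sale price while the insurance leg covers a weather shortfall:

```python
# Stylized revenue comparison for a farmer with and without the combined hedge.
# All parameter values are hypothetical, for illustration only.
def revenues(yield_tons, spot_price, forward_price, rainfall_mm,
             trigger_mm=300.0, indemnity_per_mm=50.0, premium=2000.0):
    unhedged = yield_tons * spot_price                  # exposed to both price and weather risk
    shortfall = max(trigger_mm - rainfall_mm, 0.0)      # weather-index insurance trigger
    hedged = yield_tons * forward_price + shortfall * indemnity_per_mm - premium
    return unhedged, hedged

print(revenues(yield_tons=10, spot_price=950, forward_price=1000, rainfall_mm=450))  # normal season
print(revenues(yield_tons=6, spot_price=700, forward_price=1000, rainfall_mm=180))   # drought and price crash
```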

Keywords: agriculture, forward contract, insurance contract, risk management, model

Procedia PDF Downloads 142
6271 Application of GIS Techniques for Analysing Urban Built-Up Growth of Class-I Indian Cities: A Case Study of Surat

Authors: Purba Biswas, Priyanka Dey

Abstract:

Worldwide, rapid urbanisation has accelerated city expansion in both developed and developing nations. This unprecedented urbanisation trend, driven by increasing population and economic growth, has created challenges for decision-makers in city planning and urban management. Metropolitan cities, class-I towns, and major urban centres undergo a continuous process of evolution due to the interaction between socio-cultural and economic attributes. This constant evolution leads to urban expansion in all directions. Understanding the patterns and dynamics of urban built-up growth is crucial for policymakers, urban planners, and researchers, as it aids in resource management, decision-making, and the development of sustainable strategies to address the complexities associated with rapid urbanisation. Identifying spatio-temporal patterns of urban growth has emerged as a crucial challenge in monitoring and assessing present and future trends in urban development. Analysing urban growth patterns and tracking changes in land use is an important aspect of urban studies. This study analyses spatio-temporal urban transformations and land-use and land-cover changes using remote sensing and GIS techniques. Built-up growth analysis has been carried out for the city of Surat as a case example, using the Normalized Difference Built-up Index (NDBI) together with GIS-based models of the Built-up Urban Density Index and Shannon's Entropy Index to identify trends and the geographical direction of transformation from 2005 to 2020. Surat is one of the fastest-growing urban centres in both the state and the nation, ranking as the 4th fastest-growing city globally. This study analyses the dynamics of urban built-up area transformations both zone-wise and by geographical direction, calculating their trend, rate, and magnitude over the 15-year period. The study also highlights the need to analyse and monitor the urban growth patterns of class-I cities in India using spatio-temporal and quantitative techniques such as GIS for improved urban management.
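For reference, the two indices central to the analysis can be sketched as follows (band values and zone areas are synthetic; NDBI is computed from SWIR and NIR reflectance, and the entropy is normalized by log n over the n analysis zones):

```python
# Minimal sketch of the built-up indices used in such studies (synthetic inputs).
import numpy as np

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: (SWIR - NIR) / (SWIR + NIR)."""
    swir, nir = np.asarray(swir, float), np.asarray(nir, float)
    return (swir - nir) / (swir + nir + 1e-12)

def shannon_entropy(built_up_area):
    """Relative Shannon entropy of built-up shares across n zones (0 = compact, 1 = dispersed)."""
    p = np.asarray(built_up_area, float)
    n = len(p)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(n))

print(ndbi(swir=[0.35, 0.22], nir=[0.28, 0.30]))        # positive values suggest built-up pixels
print(shannon_entropy([12.0, 3.5, 1.2, 0.8]))           # hypothetical zone-wise built-up areas, 2005
print(shannon_entropy([14.0, 9.5, 7.2, 6.8]))           # hypothetical zone-wise built-up areas, 2020
```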

Keywords: urban expansion, built-up, geographic information system, remote sensing, Shannon’s entropy

Procedia PDF Downloads 45
6270 A Holistic Approach of Cross-Cultural Management with Insight from Neuroscience

Authors: Mai Nguyen-Phuong-Mai

Abstract:

This paper incorporates insight from various models, studies and disciplines to construct a framework called the Inverted Pyramid Model. It is argued that such a framework has several advantages: (1) It reduces the shortcomings of the problem-focused approach that dominates the mainstream theories of cross-cultural management. Drawing on insight from neuroscience, it suggests that training in business cross-cultural awareness should start with the potential synergy that emerges from differences, instead of the traditional approach that focuses on the liability of foreignness and the negative consequences of cultural distance. (2) The framework supports a dynamic and holistic way of analyzing cultural diversity by examining four major cultural units (global, national, organizational and group culture). (3) The framework emphasizes the role of individuals, an aspect of culture that is often ignored or regarded as a non-issue in the traditional approach. It is based on the notion that people don't do business with a country, but work (in)directly with a unique person, and it is at this individual level that culture is made, personally, dynamically, and contextually. Insight from neuroscience provides significant evidence that a person can develop a multicultural mind, and can confirm or contradict, follow or reshape a culture, even when (s)he was previously an outsider to that culture. With this insight, the paper proposes a revision of the old adage 'Think global – Act local' into 'Think global – Plan local – Act individual'.

Keywords: static–dynamic paradigm, cultural diversity, multicultural mind, neuroscience

Procedia PDF Downloads 109
6269 Investigating the Post-Liver Transplant Complications and Their Management in Children Referred to the Children’s Medical Center

Authors: Hosein Alimadadi, Fatemeh Farahmand, Ali Jafarian, Nasir Fakhar, Mohammad Hassan Sohouli, Neda Raeesi

Abstract:

Background: Given the important role of liver transplantation as the only treatment in many cases of end-stage liver disease in children, the aim of this study is to investigate the complications of liver transplantation and their management in children referred to the Children's Medical Center. Methods: This is a cross-sectional study of pediatric patients who underwent liver transplantation between 2016 and 2021. The indication for liver transplantation in this population was confirmed by a pediatric gastroenterologist, and the liver transplant was performed by a transplant surgeon. Information about each patient before and after transplantation was collected and recorded. Results: A total of 53 patients participated in this study, including 25 (47.2%) boys and 28 (52.8%) girls. The most common causes of liver transplantation were cholestatic and metabolic diseases. The most common early complications of liver transplantation in children were acute cellular rejection (ACR) and anastomotic biliary stricture. The most common late complication in these patients was infection, which was observed in 56.6% of patients. Among drug side effects, neurotoxicity (convulsions) was the most frequently observed, and 15.1% of the transplanted patients died. Conclusion: In this study, the most common early complications of liver transplantation in children were ACR and biliary stricture, and the most common late complication was infection. Neurotoxicity (convulsions) was the most common drug side effect.

Keywords: liver transplantation, complication, infection, survival rate

Procedia PDF Downloads 65
6268 Sustainable Supply Chain Management Practices, Challenges, and Opportunities: A Case Study of Small and Medium-Sized Enterprises Within the Oil and Gas Sector

Authors: Igho Ekiugbo, Christos Papanagnou

Abstract:

The energy sector continues to face increased scrutiny due to the climate change challenges emanating from the burning of fossil fuels such as coal, oil, and gas. These challenges have motivated industry practitioners and researchers alike to take an interest in the way businesses operate. This paper aims to investigate and assess how small and medium-sized enterprises (SMEs) are reducing the impact of their operations, especially those within their supply chains, by assessing the sustainability practices they have adopted and implemented as well as the benefits and challenges of adopting such practices. Data will be collected from SMEs operating across the downstream oil and gas sector in Nigeria using questionnaire surveys. To analyse the data, confirmatory factor analysis and regression analysis will be performed. This method is deemed suitable and appropriate for testing predefined measurements of sustainable supply chain practices as contained in the extant literature. Preliminary observations indicate a consensus on awareness of the sustainability concept amongst the target participants. To the best of our knowledge, this paper is among the first to investigate the sustainability practices of SMEs operating in the Nigerian oil and gas sector and will therefore contribute to the sustainability and circular economy literature.

Keywords: small and medium-sized enterprises, sustainability practices, supply chains, sustainable supply chain management, corporate sustainability, oil and gas, business performance

Procedia PDF Downloads 107
6267 The Effect of Technology in Improving Tourism Cluster Competitiveness

Authors: Nancy Ayman Kamal Mohamed Mehrz

Abstract:

Like the economies of other countries in the Mediterranean region, the tourism sector in our country has excellent economic prospects, yet tourism businesses face various challenges in their activities. These problems have made business operations and competition between companies difficult. This study was conducted to identify the problems of the tourism sector in the Central Anatolia region, drawing on the most common problems faced by the sector and on consumer information regarding consumer rights. The aim is to contribute to the awareness of workers and managers in the tourism sector and to attract the attention of companies and legislators in the sector. E-tourism is one of the newest issues in the field of tourism; information and communication technology (ICT) infrastructure and partnerships between government and tourism organizations are required to achieve this type of tourism. This study measures the views of managers and tourism operators in Leman City regarding e-tourism; in addition, the effect of their level of literacy in the tourism management system on tourist attraction was examined. The research was carried out in one of the suburbs of Isfahan province and is documentary research whose source material includes literature and surveys. The results show that managers' use of ICT can support the development of e-tourism in the region, and that improving managers' views of e-tourism and their literacy levels can affect the development of tourism.

Keywords: financial problems, the problems of tourism businesses, tourism businesses, internet, marketing, tourism, tourism management, economic competitiveness, enhancing competitiveness

Procedia PDF Downloads 22
6266 An Overbooking Model for Car Rental Service with Different Types of Cars

Authors: Naragain Phumchusri, Kittitach Pongpairoj

Abstract:

Overbooking is a very useful revenue management technique that can help reduce the costs caused by either undersales or oversales. In this paper, we propose an overbooking model for two types of cars that minimizes the total cost of a car rental service. With two types of cars, there is a possibility of upgrading customers from the lower type to the upper type, which makes the model more complex than the single-car-type scenario. We have found that convexity can be proved in this case. Sensitivity analysis of the parameters is conducted to observe the effects of relevant parameters on the optimal solution. A model simplification is proposed using multiple linear regression analysis, which can help estimate the optimal overbooking level from appropriate independent variables. The results show that the overbooking level from the multiple linear regression model is relatively close to the optimal solution (with an adjusted R-squared value of at least 72.8%). To evaluate the performance of the proposed model, the total cost was compared with the case where the decision maker uses a naïve method to set the overbooking level. It was found that the total cost from the optimal solution is only 0.5 to 1 percent lower (on average) than the cost from the regression model, while it is approximately 67% lower than the cost obtained by the naïve method. This indicates that the proposed regression-based simplification performs effectively in estimating the overbooking level.
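A single-car-type version of the expected-cost calculation can be sketched as below (the paper's model covers two car types with upgrades; all cost and show-up parameters here are assumed for illustration). The regression simplification described above would then fit the optimal levels found by such a search against the relevant cost and demand parameters.

```python
# Illustrative single-car-type overbooking cost sketch (not the two-type model in the paper).
# Expected cost = undersale cost for idle cars + oversale cost for denied customers.
import numpy as np
from scipy.stats import binom

def expected_cost(overbook_level, capacity=50, show_prob=0.85,
                  undersale_cost=40.0, oversale_cost=120.0):
    bookings = capacity + overbook_level
    shows = np.arange(bookings + 1)
    probs = binom.pmf(shows, bookings, show_prob)       # show-ups follow a binomial model
    empty = np.maximum(capacity - shows, 0)             # idle cars (undersale)
    denied = np.maximum(shows - capacity, 0)            # customers turned away (oversale)
    return float(np.sum(probs * (undersale_cost * empty + oversale_cost * denied)))

costs = {b: expected_cost(b) for b in range(0, 21)}
best = min(costs, key=costs.get)
print(best, round(costs[best], 2))                      # convexity makes a simple search sufficient
```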

Keywords: overbooking, car rental industry, revenue management, stochastic model

Procedia PDF Downloads 156
6265 Kuwait Environmental Remediation Program: Waste Management Data Analytics for Planning and Optimization of Waste Collection

Authors: Aisha Al-Baroud

Abstract:

The United Nations Compensation Commission (UNCC), the Kuwait National Focal Point (KNFP) and the Kuwait Oil Company (KOC) cooperated in a joint project to undertake comprehensive and collaborative efforts to remediate 26 million m3 of crude-oil-contaminated soil that resulted from the Gulf War in 1990/1991. These efforts are referred to as the Kuwait Environmental Remediation Program (KERP). KOC has developed a Total Remediation Solution (TRS) for KERP to guide the remediation projects; it comprises alternative remedial solutions, with treatment techniques that include limited landfills for the disposal of non-treatable soil materials, and relies on treating certain ranges of Total Petroleum Hydrocarbon (TPH) contamination with the most appropriate remediation techniques. The KERP remediation projects will be implemented within KOC's oilfields in North and South East Kuwait. The objectives of this remediation project are to clear land for field development and to treat all oil-contaminated features (dry oil lakes, wet oil lakes, and oil-contaminated piles) through the TRS plan, optimizing the treatment processes and minimizing the volume of contaminated materials placed into landfills. The treatment strategy will comprise Excavation and Transportation (E&T) of oil-contaminated soils from contaminated land to remote treatment areas and the use of appropriate remediation technologies, or a combination of treatment technologies, to achieve the remediation target criteria (RTC). KOC has awarded five mega-projects to achieve this and is currently in the execution phase. As part of the company's commitment to the environment and in fulfilment of the mandatory HSSEMS procedures, all remediation contractors need to report waste generation data from the various project activities on a monthly basis. Data on waste generation are collected in order to implement cost-efficient and sustainable waste management operations. Data analytics approaches can be built on top of these data to produce more detailed and timely waste generation information as the basis for waste management and collection. The results obtained highlight the potential of advanced data analytics approaches in producing more detailed waste generation information for the planning and optimization of waste collection and recycling.

Keywords: waste, technologies, KERP, data, soil

Procedia PDF Downloads 96
6264 A Systematic Review of the Predictors, Mediators and Moderators of the Uncanny Valley Effect in Human-Embodied Conversational Agent Interaction

Authors: Stefanache Stefania, Ioana R. Podina

Abstract:

Background: Embodied Conversational Agents (ECAs) are revolutionizing education and healthcare by offering cost-effective, adaptable, and portable solutions. Research on the Uncanny Valley effect (UVE) involves various embodied agents, including ECAs, yet there is no consensus on how to achieve the optimal level of anthropomorphism or how to overcome the uncanniness problem. Objectives: This systematic review aims to identify the user characteristics, agent features, and contextual factors that influence the UVE. Additionally, the review provides recommendations for creating effective ECAs and for conducting proper experimental studies. Methods: We conducted a systematic review following the PRISMA 2020 guidelines. We included quantitative, peer-reviewed studies that examined human-ECA interaction, identifying 17,122 relevant records from the ACM Digital Library, IEEE Xplore, Scopus, ProQuest, and Web of Science. The quality assessment of the predictors, mediators, and moderators adheres to guidelines set by prior systematic reviews. Results: Based on the included studies, it can be concluded that females and younger people perceive ECAs as more attractive, although inconsistent findings exist in the literature. ECAs characterized by extraversion, emotional stability, and agreeableness are considered more attractive. Facial expressions also play a role in the UVE, with some studies indicating that ECAs with more facial expressions are considered more attractive, although this effect is not consistent across all studies. Few studies have explored contextual factors, but they are nonetheless crucial; the interaction scenario and exposure time are important circumstances in human-ECA interaction. Conclusions: The findings highlight a growing interest in ECAs, which have seen significant developments in recent years. Given this evolving landscape, investigating the risk of the UVE is a promising line of research.

Keywords: human-computer interaction, uncanny valley effect, embodied conversational agent, systematic review

Procedia PDF Downloads 55
6263 A Causal Model for Environmental Design of Residential Community for Elderly Well-Being in Thailand

Authors: Porntip Ruengtam

Abstract:

This article is an extension of previous research presenting the factors related to environmental perception, residential community, and the design of a healing environment that affect the well-being and requirements of the Thai elderly. The research methodology began with observations and interviews in three case studies examining the management processes and environmental design of similar existing projects in Thailand. The interview results were then summarized together with related theories and literature. A questionnaire survey was designed for data collection to confirm the factors of requirements in a residential community intended for the Thai elderly. A structural equation model (SEM) was formulated to explain the cause-effect factors for the requirements of a residential community for the Thai elderly. The research revealed that the requirements of a residential community for the Thai elderly could be classified into three groups using exploratory factor analysis: (1) requirements for general facilities and activities, (2) requirements for facilities related to health and security, and (3) requirements for facilities related to physical exercise in the residential community. The results from the SEM showed that the background of elderly people had a direct effect on various aspects of their requirements for a residential community. The results should lead to the formulation of policies for the design and management of residential communities for the elderly in order to enhance quality of life as well as the physical and mental health of the Thai elderly.
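The exploratory factor analysis step can be illustrated with a brief sketch (the survey file and item names are hypothetical; the study extracted three requirement factors):

```python
# Minimal sketch of extracting three requirement factors from Likert-scale survey items.
# Item names are hypothetical; the factor labels mirror the three groups reported in the study.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

survey = pd.read_csv("elderly_requirements_survey.csv")   # one row per respondent
items = [c for c in survey.columns if c.startswith("req_")]

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(survey[items])

loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=["facilities_activities", "health_security", "physical_exercise"])
print(loadings.round(2))   # inspect which items load onto each requirement group
```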

Keywords: elderly, environmental design, residential community, structural equation modeling

Procedia PDF Downloads 301
6262 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets

Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe

Abstract:

Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g. supercomputers, GPU clusters, etc.), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for the research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high performance computing resources and analysis techniques currently available or being developed. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution that is based upon a novel micro-service architecture and Big Data technologies. The system serves to demonstrate the applicability of micro-service architectures for the development of high performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as inserts (e.g., importing a GWAS dataset) and the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating non-relational NoSQL databases, whose development has been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
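As a purely illustrative sketch of the document-oriented storage idea (not the SPARK/The Ark implementation), genotype records can be stored and indexed in a NoSQL store such as MongoDB, avoiding the wide normalized tables that make bulk SNP imports slow:

```python
# Illustrative only: document-style storage of GWAS genotype records in MongoDB via pymongo.
# All database, collection, and field names are hypothetical.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
variants = client["spark_demo"]["genotypes"]

variants.insert_many([
    {"subject_id": "S001", "rsid": "rs12345", "chrom": "1", "pos": 1204567, "genotype": "AG"},
    {"subject_id": "S002", "rsid": "rs12345", "chrom": "1", "pos": 1204567, "genotype": "GG"},
])
variants.create_index([("rsid", ASCENDING), ("subject_id", ASCENDING)])

# Typical research query: all subjects carrying an A allele at a given SNP
carriers = variants.find({"rsid": "rs12345", "genotype": {"$regex": "A"}})
print([doc["subject_id"] for doc in carriers])
```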

Keywords: biomedical research, genomics, information systems, software

Procedia PDF Downloads 254
6261 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
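The model-averaging idea described above can be sketched as a soft-voting ensemble over the three algorithm families (the synthetic features stand in for the timing, weather-forecast, and past-pollutant inputs; this is not the study's actual pipeline or data):

```python
# Minimal sketch: combine logistic regression, random forest, and a neural network by soft voting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 3000
# Toy predictors: season, weekend flag, forecast temperature and wind, yesterday's pollutant level
X = np.column_stack([
    rng.integers(0, 4, n), rng.integers(0, 2, n),
    rng.normal(22, 6, n), rng.normal(3, 1.5, n), rng.gamma(2.0, 10.0, n),
])
y = (0.04 * X[:, 4] - 0.3 * X[:, 3] + rng.normal(0, 1, n) > 0).astype(int)  # 1 = unhealthy air day

ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("nn", make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))),
    ],
    voting="soft",   # average predicted probabilities across the three models
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
print(f"ensemble accuracy: {ensemble.fit(X_tr, y_tr).score(X_te, y_te):.2f}")
```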

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 104
6260 Modeling the Deterioration of Road Bridges at the Provincial Level in Laos

Authors: Hatthaphone Silimanotham, Michael Henry

Abstract:

The effective maintenance of road bridge infrastructure is becoming a widely researched topic in the civil engineering field. Deterioration is one of the main issues in bridge performance, and it is necessary to understand how bridges deteriorate in order to optimally plan budget allocation for bridge maintenance. In Laos, many bridges are in a deteriorated state, which may affect their performance. Because of this deterioration, the Ministry of Public Works and Transport is interested in deterioration models to allocate the budget efficiently and support bridge maintenance planning. A deterioration model can be used to predict the future condition of a bridge based on the behavior observed in the past. This paper analyzes the available inspection data for road bridges on the classified road network to build deterioration prediction models for the main bridge types found at the provincial level (concrete slab, concrete girder, and steel truss), using probabilistic deterioration modeling with the linear regression method. The analysis targets these three bridge types in the 18 provinces of Laos and estimates the bridge deterioration rating in order to evaluate each bridge's remaining life. The research thus considers the relationship between the service period and the bridge condition to represent the probability of the bridge condition in the future. The results of the study can be used for a variety of bridge management tasks, including maintenance planning, budgeting, and evaluating bridge assets.
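A minimal sketch of the regression-based deterioration idea is given below (the condition-rating scale, deterioration rate, and threshold are assumptions for illustration, not values from the study):

```python
# Minimal sketch: condition rating regressed on service period (synthetic inspection records;
# ratings assumed on a 1 (good) to 5 (poor) scale).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
age = rng.uniform(0, 40, 120)                          # years in service at inspection
rating = 1.0 + 0.08 * age + rng.normal(0, 0.4, 120)    # condition worsens with age

model = LinearRegression().fit(age.reshape(-1, 1), rating)
print(model.coef_[0], model.intercept_)                # estimated deterioration rate and baseline

# Remaining-life estimate for a 30-year-old bridge: years until the predicted rating
# reaches an assumed intervention threshold of 4.5.
threshold = 4.5
remaining = (threshold - model.intercept_) / model.coef_[0] - 30
print(round(float(remaining), 1))
```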

Keywords: deterioration model, bridge condition, bridge management, probabilistic modeling

Procedia PDF Downloads 147
6259 Epidemiological Survey of Feline Leukemia Virus in Domestic Cats on Tsushima Island, Japan: Tsushima Leopard Cats Are at Risk

Authors: Isaac Makundi, Kazuo Nishigaki

Abstract:

The Tsushima leopard cat (TLC), Prionailurus bengalensis euptilurus, designated a National Natural Monument of Japan, inhabits Tsushima Island, Nagasaki Prefecture, Japan. The TLC is considered a subspecies of P. bengalensis and lives only on Tsushima Island. TLCs are threatened by various infectious diseases. Feline leukemia virus (FeLV) causes a serious infectious disease with a poor prognosis in cats. Therefore, the transmission of FeLV from Tsushima domestic cats (TDCs) to TLCs may threaten the TLC population. We investigated the FeLV infection status of both TDCs and TLCs on Tsushima Island by screening blood samples for the FeLV p27 antigen and using PCR to amplify the full-length FeLV env gene. The prevalence of FeLV was 6.4% in TDCs and 0% in TLCs. We also demonstrated that the virus can replicate in TLC cells, suggesting the potential for cross-species transmission. The viruses in TDCs were classified as genotype I/clade 3, which is prevalent on a nearby island, based on previous studies of FeLV genotypes and FeLV epidemiology. The FeLV viruses identified on Tsushima Island can be further divided into two lineages within genotype I/clade 3, which are geographically separated in Kamijima and Shimojima, indicating that FeLV may have been introduced to Tsushima Island at least twice. Monitoring FeLV infection in the TDC and TLC populations is highly recommended as part of the TLC surveillance and management strategy.

Keywords: epidemiology, Feline leukemia virus, Tsushima Island, wildlife management

Procedia PDF Downloads 194
6258 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome

Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco

Abstract:

Today, a considerable segment of the world's population lives in urban areas, and this proportion will increase vastly in the coming decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate during the following years. The analysis of various types of data sets and their derived applications has incredible potential across crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional, analogue techniques of data collection. This paper investigates the prospects of the data science field, which appears to be a formidable resource for assisting city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of new layers of information would greatly enhance planners' capability to comprehend urban phenomena in more depth, such as gentrification, land use definition, mobility, or critical infrastructural issues. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining a youth economic discomfort index. The statistical composite index provides insights into the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly show that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground for the whole investigation. The methodology applies statistical and spatial analysis to construct a composite index supporting informed, data-driven decisions for urban planning.
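A simple z-score composite, of the kind used to build such an index, might look like the following sketch (the indicator names, equal weighting, and input file are assumptions for illustration):

```python
# Minimal sketch of a z-score based composite index over urban zones.
# Indicator names and the equal-weight aggregation are illustrative assumptions.
import pandas as pd

df = pd.read_csv("rome_zones.csv")   # one row per urban zone
indicators = ["youth_unemployment_rate", "median_rent_to_income", "neet_share"]

z = (df[indicators] - df[indicators].mean()) / df[indicators].std()   # standardize each indicator
df["youth_discomfort_index"] = z.mean(axis=1)                         # equal weights across indicators

print(df.sort_values("youth_discomfort_index", ascending=False).head())  # most disadvantaged zones
```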

Keywords: data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index

Procedia PDF Downloads 118
6257 Integrating Radar Sensors with an Autonomous Vehicle Simulator for an Enhanced Smart Parking Management System

Authors: Mohamed Gazzeh, Bradley Null, Fethi Tlili, Hichem Besbes

Abstract:

The burgeoning global ownership of personal vehicles has placed significant strain on urban infrastructure, notably parking facilities, leading to traffic congestion and environmental concerns. Effective parking management systems (PMS) are indispensable for optimizing urban traffic flow and reducing emissions. The most commonly deployed systems nowadays rely on computer vision technology. This paper explores the integration of radar sensors and simulation in the context of smart parking management. We concentrate on radar sensors because of their versatility and utility in automotive applications, which extend to PMS. Additionally, radar sensors play a crucial role in driver assistance systems and autonomous vehicle development. However, the resource-intensive nature of radar data collection for algorithm development and testing necessitates innovative solutions. Simulation, particularly the monoDrive simulator, an internal development tool used by NI, the Test and Measurement division of Emerson, offers a practical means to overcome this challenge. The primary objectives of this study encompass simulating radar sensors to generate a substantial dataset for algorithm development and testing and, critically, assessing the transferability of models between simulated and real radar data. We focus on occupancy detection in parking as a practical use case, categorizing each parking space as vacant or occupied. The simulation approach using monoDrive enables algorithm validation and reliability assessment for virtual radar sensors. We meticulously designed various parking scenarios, involving manual measurement of parking spot coordinates and orientations and the use of a TI AWR1843 radar. To create a diverse dataset, we generated 4,950 scenarios comprising a total of 455,400 parking spots. This extensive dataset encompasses radar configuration details, ground-truth occupancy information, radar detections, and associated object attributes such as range, azimuth, elevation, radar cross-section, and velocity. The paper also addresses the intricacies and challenges of real-world radar data collection, highlighting the advantages of simulation in producing radar data for parking lot applications. We developed classification models based on Support Vector Machines (SVM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), trained and evaluated exclusively on simulated data. Subsequently, we applied these models to real-world data and compared their performance against results on the monoDrive dataset. The study demonstrates the feasibility of transferring models from a simulated environment to real-world applications, achieving an accuracy of 92% using only one radar sensor. This finding underscores the potential of radar sensors and simulation in the development of smart parking management systems, offering significant benefits for improving urban mobility and reducing environmental impact. The integration of radar sensors and simulation represents a promising avenue for enhancing smart parking management systems, addressing the challenges posed by the exponential growth in personal vehicle ownership. This research contributes valuable insights into the practicality of using simulated radar data in real-world applications and underscores the role of radar technology in advancing urban sustainability.
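
As an illustration of the classification stage described above, the sketch below trains an SVM occupancy classifier on synthetic per-spot features; the feature set, value distributions, and data sizes are assumptions for demonstration only and do not come from the monoDrive dataset or the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def synth_spot(occupied: bool):
    """Synthetic per-spot features aggregated from radar detections:
    detection count, mean radar cross-section (dBsm), and elevation spread."""
    if occupied:
        return [rng.poisson(12), rng.normal(8, 3), abs(rng.normal(0.4, 0.15))]
    return [rng.poisson(2), rng.normal(-5, 4), abs(rng.normal(0.1, 0.05))]

labels = np.array([i % 2 == 0 for i in range(4000)], dtype=int)  # 1 = occupied
X = np.array([synth_spot(bool(y)) for y in labels])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("occupancy accuracy:", accuracy_score(y_te, model.predict(X_te)))
```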

Keywords: autonomous vehicle simulator, FMCW radar sensors, occupancy detection, smart parking management, transferability of models

Procedia PDF Downloads 65
6256 A Cloud-Based Federated Identity Management in Europe

Authors: Jesus Carretero, Mario Vasile, Guillermo Izquierdo, Javier Garcia-Blas

Abstract:

Currently, there is a so-called ‘identity crisis’ in cybersecurity, caused by the substantial security, privacy, and usability shortcomings of existing systems for identity management. Federated Identity Management (FIM) could be a solution to this crisis, as it is a method that facilitates the management of identity processes and policies among collaborating entities without enforcing a global consistency, which is difficult to achieve when legacy ID systems are involved. To cope with this problem, the Connecting Europe Facility (CEF) initiative proposed a federated solution in 2014, in anticipation of the adoption of Regulation (EU) N°910/2014, the so-called eIDAS Regulation. At present, a network of eIDAS Nodes is being deployed at the European level so that every citizen recognized by a member state is also recognized within the trust network at the European level, enabling the consumption of services in other member states that until now were not available or whose provision was tedious. This is a very ambitious approach, since it enables cross-border authentication of member states’ citizens without the need to unify the authentication method (eID Scheme) of the member state in question. However, this federation is currently managed by member states, and it is initially applied only to citizens and public organizations. The goal of this paper is to present the results of a European project, named eID@Cloud, that focuses on the integration of eID into 5 cloud platforms belonging to authentication service providers of different EU member states, which act as Service Providers (SP) for private entities. We propose an initiative based on a private eID Scheme for both natural and legal persons. The methodology followed in the eID@Cloud project is that each Identity Provider (IdP) is subscribed to an eIDAS Node Connector, which requests authentication and is in turn subscribed to an eIDAS Node Proxy Service, which issues authentication assertions. To cope with high loads, load balancing is supported in the eIDAS Node. The eID@Cloud project is still ongoing, but we already have some important outcomes. First, we have deployed the federated identity nodes and tested them from the security and performance points of view. The pilot prototype has shown the feasibility of deploying this kind of system, ensuring good performance thanks to the replication of the eIDAS nodes and the load-balancing mechanism. Second, our solution avoids the propagation of identity data out of the native domain of the user or entity being identified, which avoids well-known cybersecurity problems such as network interception and man-in-the-middle attacks. Last, but not least, this system allows any country or collectivity to connect easily, providing incremental development of the network and avoiding difficult political negotiations to agree on a single authentication format (which would be a major stopper).
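
eIDAS Nodes exchange SAML 2.0 messages, so the authentication request an IdP forwards to an eIDAS Node Connector is, at its core, a SAML AuthnRequest. The sketch below builds a minimal, unsigned request of that kind in Python; the issuer and destination URLs are placeholders, eIDAS-specific extensions and signing are omitted, and nothing here reflects the actual eID@Cloud endpoints.

```python
import uuid
import datetime
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"
ET.register_namespace("samlp", SAMLP)
ET.register_namespace("saml", SAML)

def build_authn_request(issuer: str, destination: str) -> bytes:
    """Build a minimal SAML 2.0 AuthnRequest of the kind an IdP could forward
    to an eIDAS Node Connector (unsigned, without eIDAS extensions)."""
    req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": "_" + uuid.uuid4().hex,
        "Version": "2.0",
        "IssueInstant": datetime.datetime.now(datetime.timezone.utc)
                         .strftime("%Y-%m-%dT%H:%M:%SZ"),
        "Destination": destination,
        "ForceAuthn": "true",
    })
    ET.SubElement(req, f"{{{SAML}}}Issuer").text = issuer
    return ET.tostring(req, encoding="utf-8", xml_declaration=True)

# Placeholder endpoint and entity ID, not real eID@Cloud addresses.
print(build_authn_request(
    issuer="https://idp.example-cloud.eu/metadata",
    destination="https://eidas-connector.example.eu/ColleagueRequest",
).decode())
```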

Keywords: cybersecurity, identity federation, trust, user authentication

Procedia PDF Downloads 155
6255 Optimizing a Hybrid Inventory System with Random Demand and Lead Time

Authors: Benga Ebouele, Thomas Tengen

Abstract:

Implementing either a periodic or a continuous inventory review model as a management tool within most manufacturing companies’ supply chains may incur high costs. These high costs affect the system’s flexibility, which in turn affects the level of service required to satisfy customers. However, these effects are not clearly understood, because the parameters of both inventory review policies (protection demand interval, order quantity, etc.) are not designed to be fully utilized under different and uncertain conditions such as poor manufacturing, supply, and delivery performance. Coming up with a hybrid model that combines, in some sense, the features of both the continuous and the periodic inventory review models should therefore be useful. There is thus a need to build such a hybrid model and evaluate it on annual total cost, stock-out probability, and system flexibility in order to identify the most cost-effective inventory review model. This work also seeks the optimal sets of inventory management parameters under stochastic conditions so as to optimize each policy independently. The results reveal that a continuous inventory system always incurs a lower cost than a periodic (R, S) inventory system, but this difference tends to decrease over time. Since the hybrid inventory model is the only one that can yield a lower cost over time, it is not only desirable but also natural to use it to help the system meet high performance specifications.
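
As a concrete illustration of comparing the two policies under random demand and lead time, the sketch below runs a simple Monte Carlo simulation of a continuous (s, Q) policy and a periodic (R, S) policy; the demand and lead-time distributions, cost rates, and policy parameters are all assumptions chosen for demonstration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

HOLD, ORDER, SHORT = 1.0, 50.0, 10.0   # assumed unit holding, ordering, shortage costs
T = 365                                 # simulation horizon in periods

def simulate(policy, s=None, Q=None, R=None, S=None, runs=200):
    """Average total cost of a continuous (s, Q) or periodic (R, S) policy
    under Poisson demand and a random (Poisson-shifted) lead time."""
    costs = []
    for _ in range(runs):
        on_hand = S if S is not None else Q
        backlog, pipeline, total = 0, [], 0.0      # pipeline: (arrival_period, qty)
        for t in range(T):
            # Receive orders due this period.
            on_hand += sum(q for a, q in pipeline if a == t)
            pipeline = [(a, q) for a, q in pipeline if a > t]
            # Satisfy demand; unmet demand is backordered.
            d = rng.poisson(5)
            served = min(on_hand, d + backlog)
            on_hand -= served
            backlog = backlog + d - served
            # Review inventory position and place orders.
            position = on_hand + sum(q for _, q in pipeline) - backlog
            lead = 1 + rng.poisson(2)              # random lead time
            if policy == "continuous" and position <= s:
                pipeline.append((t + lead, Q)); total += ORDER
            elif policy == "periodic" and t % R == 0 and S - position > 0:
                pipeline.append((t + lead, S - position)); total += ORDER
            total += HOLD * on_hand + SHORT * backlog
        costs.append(total)
    return np.mean(costs)

print("continuous (s=20, Q=40):", round(simulate("continuous", s=20, Q=40)))
print("periodic   (R=7,  S=60):", round(simulate("periodic", R=7, S=60)))
```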

Keywords: demand and lead time randomness, hybrid Inventory model, optimization, supply chain

Procedia PDF Downloads 302
6254 Canada's "Flattened Curve": A Geospatial Temporal Analysis of Canada's Amelioration of the SARS-CoV-2 Pandemic Through Coordinated Government Intervention

Authors: John Ahluwalia

Abstract:

As an affluent first-world nation, Canada took swift and comprehensive action during the outbreak of the SARS-CoV-2 (COVID-19) pandemic compared with other countries in the same socio-economic cohort. The United States has stumbled in overcoming obstacles that most developed nations have faced, which has led to significantly more per capita cases and deaths. The initial outbreaks of COVID-19 occurred in the US and Canada within days of each other and posed similar, potentially catastrophic threats to public health, the economy, and governmental stability. On a macro level, events that take place in the US have a direct impact on Canada. For example, both countries tend to enter and exit economic recessions at approximately the same time, they are each other’s largest trading partners, and their currencies are inexorably linked. Why, then, has Canada not shared the fate of the US (and many other nations) that have experienced much worse outcomes during the COVID-19 pandemic? Variables intrinsic to Canada’s national infrastructure have been instrumental in the country’s efforts to flatten the curve of COVID-19 cases and deaths. Canada’s coordinated multi-level governmental effort has allowed it to create and enforce policies related to COVID-19 at both the national and provincial levels. Canada’s policy of universal healthcare is another variable. Healthcare and public health measures are enforced at the provincial level, and it is within each province’s jurisdiction to set standards for public safety based on scientific evidence. Rather than introducing confusion and the possibility of competition for resources such as PPE and vaccines, Canada’s multi-level chain of government authority has provided consistent policies supporting national public health and local delivery of medical care. This paper will demonstrate that coordinated efforts at the provincial and federal levels have been the linchpin of Canada’s relative success in containing the deadly spread of the COVID-19 virus.

Keywords: COVID-19, Canada, GIS, temporal analysis, ESRI

Procedia PDF Downloads 137
6253 Early Outcomes and Lessons from the Implementation of a Geriatric Hip Fracture Protocol at a Level 1 Trauma Center

Authors: Peter Park, Alfonso Ayala, Douglas Saeks, Jordan Miller, Carmen Flores, Karen Nelson

Abstract:

Introduction: Hip fractures account for more than 300,000 hospital admissions every year. Many present as fragility fractures in geriatric patients with multiple medical comorbidities. Standardized protocols for the multidisciplinary management of this patient population have been shown to improve patient outcomes. A hip fracture protocol was implemented at a Level I trauma center with a focus on pre-operative medical optimization and early surgical care. This study evaluates the efficacy of that protocol, including the early transition period. Methods: A retrospective review was performed of all patients aged 60 and older with isolated hip fractures who were managed surgically between 2020 and 2022, covering patients treated 1 year prior to and 1 year following the implementation of the hip fracture protocol at a Level I trauma center. Results: 530 patients were identified: 249 patients were treated before, and 281 patients were treated after, the protocol was instituted. There was no difference in mean age (p=0.35), gender (p=0.3), or Charlson Comorbidity Index (p=0.38) between the cohorts. Following the implementation of the protocol, there were observed increases in time to surgery (27.5 h vs. 33.8 h, p=0.01), hospital length of stay (6.3 d vs. 9.7 d, p<0.001), and ED length of stay (5.1 h vs. 6.2 h, p<0.001). There were no differences in in-hospital mortality (2.01% pre vs. 3.20% post, p=0.39) or complication rates (25% pre vs. 26% post, p=0.76). A trend towards improved outcomes was seen after the early transition period but did not reach statistical significance. Conclusion: Early medical management and surgical intervention are key factors affecting outcomes following fragility hip fractures. The implementation of a hip fracture protocol at this institution has not yet significantly affected these parameters, which could in part be due to the restrictions placed on the institution during the COVID-19 pandemic. Despite this, the time to the operating room both pre- and post-implementation was quicker than figures reported elsewhere in the literature. Further longitudinal data will be collected to determine the final influence of this protocol. Significance/Clinical Relevance: Given the increasing number of elderly people and the high morbidity and mortality associated with hip fractures in this population, finding cost-effective ways to improve outcomes in the management of these injuries has the potential to have an enormous positive impact on both patients and hospital systems.
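
For readers wanting to reproduce the kind of pre/post comparison reported above, a minimal sketch of a two-proportion chi-square test is given below; the event counts are back-calculated from the reported percentages (2.01% of 249 ≈ 5 deaths, 3.20% of 281 ≈ 9 deaths) and are therefore approximations, not figures taken from the study.

```python
from scipy.stats import chi2_contingency

# In-hospital mortality, pre- vs. post-protocol (counts approximated from
# the reported percentages; 5/249 ≈ 2.01%, 9/281 ≈ 3.20%).
table = [[5, 249 - 5],    # pre-protocol:  deaths, survivors
         [9, 281 - 9]]    # post-protocol: deaths, survivors

# correction=False gives the uncorrected Pearson chi-square test.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, p = {p:.2f}")
```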

Keywords: hip fracture, geriatric, treatment algorithm, preoperative optimization

Procedia PDF Downloads 61
6252 Establishing Quality Evaluation Indicators of Early Education Center for 0~3 Years Old

Authors: Lina Feng

Abstract:

The study aimed at establishing quality evaluation indicators for early education centers for 0~3 years old and at defining their weight system. An expert questionnaire and the fuzzy Delphi method were applied. Firstly, to ensure that the indicators accorded with early education practice, 16 experts were invited to respond to a preliminary expert questionnaire on quality evaluation indicators of early education centers for 0~3 years old. The indicators were based on relevant studies of quality evaluation indicators for early education centers in China and abroad. Secondly, 20 scholars, kindergarten principals, and educational administrators were invited to form a fuzzy Delphi expert panel. The experts’ opinions on the importance of the indicators were aggregated using triangular fuzzy numbers in order to select appropriate indicators and calculate indicator weights. This procedure resulted in the final Quality Evaluation Indicators of Early Education Center for 0~3 Years Old. The indicators comprised three levels, including 6 first-level indicators, 30 second-level indicators, and 147 third-level indicators. The 6 first-level indicators were health and safety; educational and cultivating activities; development of babies; conditions of the center; management of the center; and collaboration between family and the community. The indicators established by this study could offer guidance on providing a high-quality environment that promotes the development of children in their early years.
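
As an illustration of the aggregation step described above, the sketch below applies a common fuzzy Delphi formulation: each indicator's expert ratings are summarized as a triangular fuzzy number (minimum, geometric mean, maximum), defuzzified by a simple centroid, and screened against a consensus threshold. The indicator names, ratings, and threshold are hypothetical and do not come from the study.

```python
import numpy as np

# Hypothetical 1-10 importance ratings from 20 experts for three candidate indicators.
ratings = {
    "health and safety":      np.array([9, 8, 10, 9, 9, 8, 10, 9, 9, 8, 9, 10, 9, 8, 9, 9, 10, 8, 9, 9]),
    "educational activities": np.array([8, 7, 9, 8, 8, 9, 8, 7, 8, 8, 9, 8, 7, 8, 8, 9, 8, 8, 7, 8]),
    "decorative wall colour": np.array([4, 5, 3, 4, 6, 5, 4, 3, 5, 4, 5, 4, 4, 5, 3, 4, 5, 4, 4, 5]),
}
THRESHOLD = 6.0   # assumed consensus cut-off for retaining an indicator

for name, r in ratings.items():
    # Triangular fuzzy number: (minimum, geometric mean, maximum) of the ratings.
    l, m, u = r.min(), np.exp(np.log(r).mean()), r.max()
    crisp = (l + m + u) / 3.0          # simple-centroid defuzzification
    decision = "retain" if crisp >= THRESHOLD else "drop"
    print(f"{name:24s} TFN=({l:.1f}, {m:.2f}, {u:.1f})  crisp={crisp:.2f}  -> {decision}")
```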

Keywords: early education center for 0~3 years old, educational management, fuzzy delphi method, quality evaluation indicator

Procedia PDF Downloads 243