Search results for: mathematical discussion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3710

680 A Proposal for an Excessivist Social Welfare Ordering

Authors: V. De Sandi

Abstract:

In this paper, we characterize a class of rank-weighted social welfare orderings that we call "Excessivist." The Excessivist Social Welfare Ordering (eSWO) judges incomes above a fixed threshold θ as detrimental to society. This requires identifying a richness or affluence line; we employ a fixed, exogenous line of excess. We define an eSWF as a weighted sum of individuals' incomes, which requires introducing n+1 vectors of weights, one for each possible number of individuals below the threshold. To do this, the paper slightly modifies the class of rank-weighted social welfare functions: in our excessivist social welfare ordering, the weights may be both positive (for individuals below the line) and negative (for individuals above it). We then introduce ethical concerns through an axiomatic approach. The following axioms are required: continuity above and below the threshold (Ca, Cb), anonymity (A), absolute aversion to excessive richness (AER), Pigou-Dalton positive-weights-preserving transfer (PDwpT), sign-rank-preserving full comparability (SwpFC), and strong Pareto below the threshold (SPb). Ca and Cb require that small changes in two income distributions above and below θ do not change their ordering. AER says that if two distributions are identical except for one individual above the threshold, who is richer in the first, then society should prefer the second. This means that we do not care about the waste of resources above the threshold; the priority is the reduction of excessive income. According to PDwpT, a transfer from a better-off to a worse-off individual, regardless of their positions relative to the threshold and without reversing their ranks, improves the distribution provided the number of individuals below the threshold stays the same or increases after the transfer.
SPb holds only for individuals below the threshold. This weakening of strong Pareto, and our ethics more generally, need justification; we support them through the notions of comparative egalitarianism and of income as a source of power. SwpFC ensures that, following a positive affine transformation, an individual does not become excessively rich in only one distribution, thereby reversing the ordering of the distributions. Given the axioms above, we characterize the class of eSWOs, obtaining the following result through a proof by contradiction and exhaustion: Theorem 1. A social welfare ordering satisfies continuity above and below the threshold, anonymity, sign-rank-preserving full comparability, aversion to excessive richness, Pigou-Dalton positive-weight-preserving transfer, and strong Pareto below the threshold if and only if it is an Excessivist social welfare ordering. A discussion of the implementation of different threshold lines, reviewing the primary contributions in this field, follows. What commonly implemented social welfare functions have overlooked is a concern for extreme richness at the top. The characterization of the Excessivist Social Welfare Ordering, given the axioms above, aims to fill this gap.
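The weighted-sum form described in the abstract can be sketched in code. This is an illustrative reading of the eSWO, not the authors' formulation: the weight vectors, the strict-inequality threshold convention, and the three-person economy below are all invented for the example.

```python
def excessivist_swf(incomes, theta, weight_sets):
    """Rank-weighted 'excessivist' welfare value.

    weight_sets[k] is the weight vector applied when exactly k
    individuals lie below the threshold theta (n + 1 vectors in all):
    positive entries for ranks below the line, negative above it.
    """
    ranked = sorted(incomes)                 # anonymity: only ranks matter
    k = sum(1 for y in ranked if y < theta)  # strict '<' is a convention here
    w = weight_sets[k]
    return sum(wi * yi for wi, yi in zip(w, ranked))

# Invented three-person economy with theta = 100 and toy weights.
theta = 100
weight_sets = {k: [2.0] * k + [-1.0] * (3 - k) for k in range(4)}
a = excessivist_swf([40, 60, 150], theta, weight_sets)  # welfare 50.0
b = excessivist_swf([40, 60, 200], theta, weight_sets)  # same below, richer above
print(a > b)  # AER: extra income above theta lowers welfare -> True
```

Under AER, making the one above-threshold individual richer (150 → 200) strictly lowers the welfare value, so the first distribution is preferred.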

Keywords: comparative egalitarianism, excess income, inequality aversion, social welfare ordering

Procedia PDF Downloads 47
679 The Potential Involvement of Platelet Indices in Insulin Resistance in Morbid Obese Children

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Association between insulin resistance (IR) and hematological parameters has long been a matter of interest. Within this context, body mass index (BMI), red blood cells, white blood cells, and platelets have entered the discussion. Platelet-related parameters associated with IR may be useful indicators for its identification. Platelet indices such as mean platelet volume (MPV), platelet distribution width (PDW), and plateletcrit (PCT) are being questioned for their possible association with IR. The aim of this study was to investigate the association between platelet (PLT) count, as well as PLT indices, and the surrogate indices used to determine IR in morbid obese (MO) children. A total of 167 children participated in the study, constituting three groups: 34, 97, and 36 children in the normal-BMI (N-BMI), MO, and metabolic syndrome (MetS) groups, respectively. Sex- and age-dependent BMI-based percentile tables prepared by the World Health Organization were used for the definition of morbid obesity, and MetS criteria were determined. BMI values, homeostatic model assessment for IR (HOMA-IR), alanine transaminase-to-aspartate transaminase ratio (ALT/AST), and diagnostic obesity notation model assessment laboratory (DONMA-lab) index values were computed. PLT count and indices were analyzed using an automated hematology analyzer. Data were collected for statistical analysis using SPSS for Windows. Arithmetic means and standard deviations were calculated. Mean values of PLT-related parameters in the control and study groups were compared by one-way ANOVA followed by Tukey post hoc tests to determine whether a significant difference exists among the groups. Correlation analyses between PLT parameters and IR indices were performed. Differences were accepted as statistically significant at p < 0.05. Increased values were detected for PLT (p < 0.01) and PCT (p > 0.05) in the MO group compared to those observed in children with N-BMI.
Significant increases in PLT (p < 0.01) and PCT (p < 0.05) were observed in the MetS group in comparison with the values obtained in children with N-BMI. Significantly lower MPV and PDW values were obtained in the MO group compared to the control group (p < 0.01). HOMA-IR (p < 0.05), DONMA-lab index (p < 0.001), and ALT/AST (p < 0.001) values in the MO and MetS groups were significantly increased compared to the N-BMI group. On the other hand, DONMA-lab index values also differed between the MO and MetS groups (p < 0.001). In the MO group, PLT was negatively correlated with MPV and PDW values. These correlations were not observed in the N-BMI group. None of the IR indices exhibited a correlation with PLT or PLT indices in the N-BMI group. HOMA-IR showed significant correlations with both PLT and PCT in the MO group. All three IR indices were well correlated with each other in all groups. These findings point to a missing link between IR and PLT activation. In conclusion, PLT and PCT may be related to IR, in addition to their identities as hemostasis markers, during morbid obesity. Our findings suggest that the DONMA-lab index is the best surrogate marker for IR owing to its ability to discriminate between morbid obesity and MetS.
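The group comparison described above (one-way ANOVA across the N-BMI, MO, and MetS groups, before the Tukey post hoc step) can be sketched without statistical packages. The platelet counts below are synthetic illustrative numbers, not the study's data.

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across k groups (pure Python)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic platelet counts (x10^3/uL) for three groups -- NOT the study's data.
nbmi = [250, 260, 255, 245]
mo = [300, 310, 305, 315]
mets = [320, 330, 325, 335]
F = one_way_anova_F([nbmi, mo, mets])
print(F > 4.26)  # above the 5% critical value F(2, 9) ~ 4.26 -> True
```

A significant F would then be followed by pairwise Tukey comparisons, as in the study.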

Keywords: children, insulin resistance, metabolic syndrome, plateletcrit, platelet indices

Procedia PDF Downloads 94
678 Challenges to Safe and Effective Prescription Writing in the Environment Where Digital Prescribing is Absent

Authors: Prashant Neupane, Asmi Pandey, Mumna Ehsan, Katie Davies, Richard Lowsby

Abstract:

Introduction/Background & aims: Safe and effective prescribing in hospitals directly and indirectly impacts the health of patients. Even though digital prescribing in the UK National Health Service (NHS) has been adopted in many tertiary centres and district general hospitals, a significant number of NHS trusts still use paper prescribing. We came across many irregularities in our daily clinical practice when paper prescribing. The main aim of the study was to assess how safely and effectively we are prescribing at our hospital, where there is no access to digital prescribing. Method/Summary of work: We conducted a prospective audit in the critical care department at Mid Cheshire Hospitals NHS Foundation Trust, in which 20 prescription charts from different patients were randomly selected over a period of 1 month. We assessed 16 categories on each prescription chart and compared them to the standard trust guidelines on prescription. Results/Discussion: We collected data from 20 different prescription charts, evaluating 16 categories within each. The results showed an urgent need for improvement in 8 sections. In 85% of the prescription charts, not all prescribers who prescribed medications were identified: name, GMC number, and signature were absent from the required prescriber identification section. In 70% of charts, either the indication or the review date of antimicrobials was absent. Units of medication were not documented correctly in 65%, and the allergy status of the patient was absent in 30% of charts. The start date of medications was missing, and alterations of medications were not made properly, in 35% of charts. The patient's name was not recorded in all required sections of the chart in 50% of cases, and cancellations of medications were not done properly in 45% of the prescription charts.
Conclusion(s): From the audit and data analysis, we identified the areas in which prescription writing in the critical care department needs improvement. During meetings and conversations with experts from the pharmacy department, however, we realized that this audit represents only a specialized department of the hospital, where access to prescribing is limited to a certain number of prescribers. In larger departments of the hospital, where patient turnover is much higher, the results could be considerably worse. The findings were discussed in the critical care MDT meeting, where digital/electronic prescribing was suggested. A presentation on safe and effective prescribing was delivered, and an awareness poster was prepared and attached beside every bed in critical care, where it is visible to prescribers. We consider this a temporary measure to improve the quality of prescribing; however, we strongly believe digital prescribing will go much further in controlling the weak areas seen in paper prescribing.

Keywords: safe prescribing, NHS, digital prescribing, prescription chart

Procedia PDF Downloads 111
677 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique

Authors: S. Jalaja, A. M. Vijaya Prakash

Abstract:

A recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transpose and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication and a carry-save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication, implemented up to n bits. As a result, we design a modified N-tap transpose and parallel symmetric FIR filter structure using the Karatsuba algorithm, and the mathematical formulation of the FFA filter is derived. The proposed architecture involves a significantly smaller area-delay product (ADP) than the existing block implementation. By adopting a retiming technique, hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and implemented using Cadence EDA tools. The synthesized results show better performance for different word lengths and block sizes. The design achieves switching activity reduction and low power consumption, evaluated with and without retiming for different circuit combinations, and attains a power reduction of more than half compared to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and 70% less power by applying the retiming technique, and the CSA method achieves 57% and 77% less power, compared to the previously proposed design.
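The recursive Karatsuba scheme on which the filter multipliers are based can be sketched in Python; this is the textbook algorithm, not the paper's hardware architecture. Three half-size products replace the four of schoolbook multiplication, which is the source of the complexity reduction mentioned above.

```python
def karatsuba(x, y):
    """Textbook recursive Karatsuba multiplication of non-negative ints:
    three half-size products replace the four of schoolbook multiplication."""
    if x < 16 or y < 16:                     # small operands: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)      # split x into high/low halves
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)                    # high x high
    b = karatsuba(xl, yl)                    # low x low
    c = karatsuba(xh + xl, yh + yl) - a - b  # cross terms from one product
    return (a << (2 * m)) + (c << m) + b

print(karatsuba(1234, 5678) == 1234 * 5678)  # True
```

In hardware, the same recursion is unrolled into a fixed multiplier tree, which is where the retiming registers of the paper are then inserted.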

Keywords: carry save adder Karatsuba multiplication, mid range Karatsuba multiplication, modified FFA and transposed filter, retiming

Procedia PDF Downloads 224
676 Thinking for Writing: Evidence of Language Transfer in Chinese ESL Learners’ Written Narratives

Authors: Nan Yang, Hye Pae

Abstract:

English as a second language (ESL) learners are often observed to transfer traits of their first language (L1), and habits of using it, to their use of English (second language, L2); this phenomenon is termed language transfer. In addition to the transfer of linguistic features (e.g., grammar and vocabulary), which are relatively easy to observe and quantify, many cross-cultural theorists have emphasized a subtler and more fundamental transfer existing at a higher conceptual level, referred to as conceptual transfer. Although a growing body of literature in linguistics has demonstrated evidence of L1 transfer in various discourse genres, few studies address the underlying conceptual transfer that occurs alongside language transfer, especially in extended, spontaneous discourse such as personal narrative. To address this issue, this study situates itself in the context of Chinese ESL learners' written narratives, examines evidence of L1 conceptual transfer in comparison with native English speakers' narratives, and discusses the findings from the perspective of conceptual transfer. It is hypothesized that Chinese ESL learners' English narrative strategies are heavily influenced by the strategies they use in Chinese, as a result of conceptual transfer. Understanding language transfer cognitively is of great significance for SLA: it helps address challenges that ESL learners around the world are facing, allows native English speakers to develop a better understanding of how and why learners' English is different, and sheds light on ESL pedagogy by clarifying the linguistic and cultural expectations in native English-speaking countries.
To achieve these goals, 40 college students were recruited (20 Chinese ESL learners and 20 native English speakers) in the United States, and their written narratives on the prompt 'The most frightening experience' were collected for quantitative discourse analysis. 40 written narratives (20 in Chinese and 20 in English) were collected from the Chinese ESL learners, and 20 written narratives from the native English speakers. All written narratives were coded according to a coding scheme developed by the authors prior to data collection. Descriptive statistical analyses were conducted, and the preliminary results revealed that native English speakers included more narrative elements, such as events and explicit evaluation, than Chinese ESL students did in both their English and Chinese writings; the English group also utilized more evaluation devices (i.e., physical state expressions, indirectly reported speech, delineation). It was also observed that Chinese ESL students included more orientation elements (i.e., introduction of time/place and of characters) in their Chinese and English writings than the native English-speaking participants. The findings suggest that a similar narrative strategy underlies Chinese ESL learners' Chinese and English narratives, which is taken as evidence of conceptual transfer from Chinese (L1) to English (L2). The results also indicate that distinct narrative strategies were used by Chinese ESL learners and native English speakers as a result of cross-cultural differences.

Keywords: Chinese ESL learners, language transfer, thinking-for-speaking, written narratives

Procedia PDF Downloads 105
675 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos

Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling

Abstract:

Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person or object, they move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head toward the monitor screen compared with listening to an audio-only version of the song. Methods: We presented two musical stimuli, in counterbalanced order, to 27 participants (ages 21.00 ± 2.89, 15 female) seated in front of a 47.5 × 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA, boredom ratings (0-100) = 15.00 ± 4.76, mean ± standard error of the mean, SEM) and Do What You Want (DWYW, boredom ratings = 23.93 ± 5.98), which did not differ in boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once, one song (counterbalanced) as audio-only and the other as a music video. Movement was measured by video tracking using Kinovea 0.8, recorded from a lateral aspect; before beginning, each participant had a reflective motion-tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated-values format was performed in MATLAB, as were non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for HIGA and DWYW, mean ± SEM, 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83, P = 0.0066, Wilcoxon Signed Rank Test (WSRT), Cohen's d = 0.658, N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) as during the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT).
However, the participants' mean head-to-screen distances were not detectably smaller (i.e. head closer to the screen) during the music videos (74.4 ± 1.8 cm) compared to the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT). If anything, during the audio-only condition, they were slightly closer. Interestingly, the ranges of the head-to-screen distances were smaller during the music video (8.6 ± 1.4 cm) compared to the audio-only (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and their heads were held 7 mm higher (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally-made video. However, we did not find that the proxemics of the situation led to approaching the screen. Instead, adding video led to efforts to control the head to a more central and upright viewing position and to suppress head fidgeting.
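The head-speed measure reported above (mean marker displacement per second from the tracked X-Y coordinates) can be sketched as follows; the function name, frame rate, and four-point track are hypothetical, not the study's pipeline.

```python
import math

def mean_speed(track, fps):
    """Mean marker speed from a sequence of (x, y) positions
    sampled at a fixed frame rate (same length units as input)."""
    steps = [math.dist(p, q) for p, q in zip(track, track[1:])]
    return sum(steps) / len(steps) * fps

# Hypothetical 4-frame lateral-view marker track in mm, at 25 fps.
track = [(0.0, 0.0), (0.0, 0.02), (0.01, 0.02), (0.01, 0.03)]
v = mean_speed(track, 25)
print(round(v, 3))  # 0.333 mm/sec
```

The same per-frame coordinates also yield the mean, range, and standard deviation of head-to-screen distance compared between conditions above.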

Keywords: boredom, engagement, music videos, posture, proxemics

Procedia PDF Downloads 155
674 Traditional Medicine and Islamic Holistic Approach in Palliative Care Management of Terminally Ill Patients of Cancer

Authors: Mohammed Khalil Ur Rahman, Mohammed Alsharon, Arshad Muktar, Zahid Shaik

Abstract:

Any ailment can reach a terminal stage; cancer is one such disease, and it is often detected at a late stage. Cancer is often characterized by agonizing constitutional symptoms that distress patients and their families. The treatment modality employed to relieve such intolerable symptoms is known as palliative care. The goal of palliative care is to enhance the patient's quality of life by relieving, or at least reducing, distressing symptoms such as pain, nausea/vomiting, anorexia/loss of appetite, excessive salivation, mouth ulcers, weight loss, constipation, oral thrush, and emaciation, whether due to the disease itself or to treatments such as chemotherapy and radiation. Ayurveda, Unani, and other traditional medicines have received growing international attention in recent years, and from their holistic perspective on disease it appears that many herbs and herbo-mineral preparations could be employed in the treatment of malignancy and in palliative care. Though many of them have yet to be scientifically proven anti-cancerous, there is a definite positive lead that some of these medications relieve agonizing symptoms, thereby making the patient's life easier. Health is viewed in Islam in a holistic way. One of the names of the Quran is al-shifa', meaning 'that which heals' or 'the restorer of health', referring to spiritual, intellectual, psychological, and physical health. The general aim of medical science, according to Islam, is to secure and adopt suitable measures which, with Allah's permission, help preserve or restore the health of the human body. Islam motivates the physician to view the patient as one organism. The patient has physical, social, psychological, and spiritual dimensions that must be considered in synthesis within an integrated, holistic approach.
Aims & Objectives: To suggest herbs mentioned in Ayurveda and Unani with potential palliative activity for cancer patients. Most of tibb nabawi (Prophetic Medicine) is preventive medicine and is held to be divinely inspired. Spiritual aspects of healing, such as prayer, dua, recitation of the Quran, and remembrance of Allah, play a central role. Materials & Method: A literary review of the herbs, supported by experiential evidence, will be discussed. Discussion: The subject will be discussed at length on the basis of the collected data. Conclusion: Will be presented in the paper.

Keywords: palliative care, holistic, Ayurvedic and Unani traditional system of medicine, Quran, hadith

Procedia PDF Downloads 332
673 A System Dynamics Model for Analyzing Customer Satisfaction in Healthcare Systems

Authors: Mahdi Bastan, Ali Mohammad Ahmadvand, Fatemeh Soltani Khamsehpour

Abstract:

Health organizations' sustainable development has become strongly affected by customer satisfaction, owing to significant changes in the business environment of the healthcare system and the emergence of the competitiveness paradigm. If we view hospitals and other health organizations as service providers with profit concerns, the satisfaction of employees as internal customers and of patients as external customers is of significant importance to health business success. Furthermore, the satisfaction rate can serve as a perceived quality measure in the performance assessment of healthcare organizations. Several studies have identified factors that affect patient satisfaction in health organizations. From a systemic view, however, the complex causal relations among the many components of a healthcare system mean that acquiring and sustaining satisfaction requires an understanding of dynamic complexity, an appropriate cognition of the different components, and of the effective relationships among them, ultimately identifying the generative structure of patient satisfaction. Hence, this paper applies system dynamics approaches coherently and methodologically to represent the systemic structure of customer satisfaction in a health system, including its constituent components and the interactions among them. The results of different policies applied to the system are then simulated by developing mathematical models, identifying leverage points, and using scenario-making techniques, and the best solutions for improving customer satisfaction with the services are presented. The approach supports the use of decision support systems and, relying on an understanding of system behavior dynamics, allows effective policies for improving the health system to be recognized.
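As a minimal sketch of the system dynamics approach described above, the one-stock model below adjusts a satisfaction level toward a perceived quality target by Euler integration. The structure and all parameter values are invented for illustration and are far simpler than the paper's model.

```python
def simulate_satisfaction(s0, target, adjust_time, steps, dt=1.0):
    """Euler integration of a one-stock goal-seeking loop: the
    satisfaction stock closes the gap to a quality target at a
    rate set by the adjustment time (a balancing feedback loop)."""
    s, history = s0, [s0]
    for _ in range(steps):
        inflow = (target - s) / adjust_time  # net change in satisfaction
        s += inflow * dt
        history.append(s)
    return history

# Invented run: satisfaction 0.4 adjusting toward 0.9 over 30 periods.
h = simulate_satisfaction(s0=0.4, target=0.9, adjust_time=5.0, steps=30)
print(round(h[-1], 2))  # 0.9 -- the stock converges on the target
```

Policy experiments in the paper's sense would vary the target or adjustment time (leverage points) and compare the simulated trajectories.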

Keywords: customer satisfaction, healthcare, scenario, simulation, system dynamics

Procedia PDF Downloads 397
672 Aerodynamic Modeling Using Flight Data at High Angle of Attack

Authors: Rakesh Kumar, A. K. Ghosh

Abstract:

The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack near stall has been included in the aerodynamic model using Kirchhoff's quasi-steady stall model. The NGN method is an algorithm that combines a Feed Forward Neural Network (FFNN) with Gauss-Newton optimization to estimate the parameters; it requires neither an a priori postulation of the mathematical model nor the solving of equations of motion. The NGN method was validated on real flight data generated at moderate angles of attack before being applied to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparison with wind tunnel values and maximum likelihood estimates. Validation was also carried out by comparing the response of measured motion variables with the response generated from the estimates for a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained, in terms of stall characteristics and aerodynamic parameters, were encouraging and sufficiently accurate to establish NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.
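Kirchhoff's quasi-steady stall model mentioned above is commonly written with a flow-separation point X that scales the linear lift-curve slope; a sketch of that standard form follows. The parameter values in the usage line are illustrative, not Hansa-3 estimates.

```python
import math

def kirchhoff_cl(alpha, cl_alpha, a1, alpha_star):
    """Quasi-steady Kirchhoff lift: the flow-separation point x
    (1 = fully attached, 0 = fully separated) scales the linear
    lift-curve slope cl_alpha; a1 and alpha_star shape the stall."""
    x = 0.5 * (1.0 - math.tanh(a1 * (alpha - alpha_star)))
    return cl_alpha * ((1.0 + math.sqrt(x)) / 2.0) ** 2 * alpha

# Illustrative parameters (angles in rad): at small alpha the flow is
# attached (x ~ 1) and the model recovers the linear lift cl_alpha*alpha;
# deep in the stall it collapses toward cl_alpha*alpha/4.
print(round(kirchhoff_cl(0.05, 5.7, 30.0, 0.3), 3))  # ~0.285 = 5.7 * 0.05
```

In the paper's setting, a1 and alpha_star are among the stall parameters estimated from the quasi-steady stall maneuver data.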

Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling

Procedia PDF Downloads 429
671 Measuring the Influence of Functional Proximity on Environmental Urban Performance via IMM: Four Study Cases in Milan

Authors: Massimo Tadi, M. Hadi Mohammad Zadeh, Ozge Ogut

Abstract:

Although the structure of urban form has been studied, more effort is needed on systemic comprehension and evaluation of urban morphology through quantitative metrics able to describe the performance of a city in relation to its formal properties. More research in this direction is required to better describe the characteristics of urban form and their impact on the environmental performance of cities, and to increase their sustainability stewardship. With the aim of developing a better understanding of the built environment's systemic structure, this paper presents a holistic methodology for studying the behavior of the built environment and investigates methods for measuring the effect of urban structure on environmental performance. This goal is pursued through an inquiry into the morphological components of urban systems and the complex relationships between them. In particular, the paper focuses on proximity between different land uses, a concept with which the Integrated Modification Methodology (IMM) explains how land-use allocation might affect the choice of mobility in neighborhoods and, especially, encourage or discourage non-motorized mobility. The paper uses proximity to demonstrate that structural attributes relate quantifiably to performance behavior in the city. The target is to derive a mathematical pattern from the structural elements and correlate it directly with urban performance indicators concerned with environmental sustainability. The paper presents results of this rigorous investigation of urban proximity and its correlation with performance indicators in four areas of the city of Milan, each characterized by different morphological features.
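The final step described above, correlating a structural metric with a performance indicator across the study areas, reduces to a correlation computation; a minimal sketch follows, with hypothetical scores for the four Milan areas (these are not measured IMM values).

```python
def pearson_r(xs, ys):
    """Pearson correlation between a structural metric and a
    performance indicator measured over the same study areas."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for four study areas: land-use proximity
# vs. share of non-motorized trips.
proximity = [0.8, 0.6, 0.4, 0.2]
nonmotorized_share = [0.55, 0.45, 0.30, 0.20]
r = pearson_r(proximity, nonmotorized_share)
print(r > 0.99)  # strongly positive in this toy example -> True
```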

Keywords: built environment, ecology, sustainable indicators, sustainability, urban morphology

Procedia PDF Downloads 152
670 The Convention of Culture: A Comprehensive Study on Dispute Resolution Pertaining to Heritage and Related Issues

Authors: Bhargavi G. Iyer, Ojaswi Bhagat

Abstract:

In recent years, there has been considerable discussion about ethnic imbalance and diversity in the international context. Arbitration is now subject to the hegemony of a small number of people who are constantly reappointed. When a court system becomes exclusionary, the quality of adjudication suffers significantly. In such a framework, there is a misalignment between adjudicators' preconceived views and the interests of the parties, resulting in a biased view of the proceedings. The world is currently witnessing a slew of intellectual property battles over cultural appropriation. The term 'cultural appropriation' refers to the industrial West's theft of indigenous culture, usually for fashion, aesthetic, or dramatic purposes. Selena Gomez exemplifies cultural appropriation in her commercial use, as a fashion symbol, of the 'bindi', which is sacred to Hinduism. In another case, Victoria's Secret insulted indigenous peoples by appropriating Native American headdresses. A similar process can be witnessed in the case of yoga, with Vedic philosophy being reduced to a type of physical practice. Such a viewpoint is problematic since indigenous groups have worked hard for generations to ensure the survival of their culture, and its appropriation by the Western world for purely aesthetic and theatrical purposes is upsetting to those who practise such cultures. Because such conflicts involve numerous jurisdictions, they must be resolved through international arbitration. However, these conflicts are currently being litigated, and the aggrieved parties, namely developing nations, do not consider it prudent to use the World Intellectual Property Organization's (WIPO) established arbitration procedure. This practice, it is suggested in this study, is the outcome of Europe's exclusionary arbitral system, which fails to recognise the non-legal and non-commercial nature of indigenous culture issues.
This research paper proposes a more comprehensive, inclusive approach that recognises the non-legal and non-commercial aspects of IP disputes involving cultural appropriation, which can only be achieved through an ethnically balanced arbitration structure. This paper also aspires to expound upon the benefits of arbitration and other means of alternative dispute resolution (ADR) in the context of disputes pertaining to cultural issues; positing that inclusivity is a solution to the existing discord between international practices and localised cultural points of dispute. This paper also hopes to explicate measures that will facilitate ensuring inclusion and ideal practices in the domain of arbitration law, particularly pertaining to cultural heritage and indigenous expression.

Keywords: arbitration law, cultural appropriation, dispute resolution, heritage, intellectual property

Procedia PDF Downloads 132
669 Strategies for Incorporating Intercultural Intelligence into Higher Education

Authors: Hyoshin Kim

Abstract:

Most post-secondary educational institutions have offered a wide variety of professional development programs and resources in order to advance the quality of education. Such programs are designed to support faculty members by focusing on topics such as course design, behavioral learning objectives, class discussion, and evaluation methods. They are well intentioned and might help both new and experienced educators. The fundamental flaw, however, is that these 'effective methods' are assumed to work regardless of what we teach and whom we teach. This paper is focused on intercultural intelligence and its application to education. It presents a comprehensive literature review on context and cultural diversity in terms of beliefs, values, and worldviews. What has worked well with a group of homogeneous local students may not work well with more diverse and international students, because students hold different notions of what it means to learn or know something. It is necessary for educators to move away from generic sets of teaching skills that are based on a limited, particular view of teaching and learning. The main objective of the research is to expand our teaching strategies by incorporating what students bring to the course. There has been a growing number of resources and texts on teaching international students. Unfortunately, they tend to be based on the deficiency model, which treats diversity not as a strength but as a problem to be solved. This view is evidenced by the heavy emphasis on assimilationist approaches; cultural difference is negatively evaluated, either implicitly or explicitly, and the pressure therefore falls on culturally diverse students. The following questions reflect the underlying assumption of deficiency: How can we make them learn better? How can we bring them into the mainstream academic culture? How can they adapt to Western educational systems?
Even though these questions may be well-intended, there is something fundamentally wrong with them, as an assumption of cultural superiority is embedded in this kind of thinking. This paper examines how educators can incorporate intercultural intelligence into course design by utilizing a variety of tools such as pre-course activities, peer learning, and reflective learning journals. The main goal is to explore ways to engage diverse learners in all aspects of learning. This can be achieved through activities designed to understand their prior knowledge, life experiences, and relevant cultural identities. It is crucial to link course material to students' diverse interests, thereby enhancing the relevance of course content and making learning more inclusive. Internationalization of higher education can succeed only when cultural differences are respected and celebrated as essential and positive aspects of teaching and learning.

Keywords: intercultural competence, intercultural intelligence, teaching and learning, post-secondary education

Procedia PDF Downloads 202
668 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks

Authors: Nafisa Mahbub, Hajo Ribberink

Abstract:

Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for the metals and other materials needed to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a real risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller batteries have substantial potential to reduce material usage and the associated environmental and cost burdens. One such technology is the 'electrified road' (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. Because different electrification technologies require different amounts of materials for charging infrastructure and for the truck batteries, the study included the contributions of both in the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging.
The materials investigated for the charging technology and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, scenarios ranging from overnight charging (350 kW) to megawatt charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study assumed trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery for the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios. Material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.
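The bottom-up accounting described above can be sketched as follows. All numbers (material intensities per kWh of battery, infrastructure masses for the 500 km corridor, fleet size) are illustrative placeholders, not the study's data; only the battery sizes (800 kWh vs. 200 kWh) come from the abstract.

```python
# Hypothetical bottom-up sketch of the charging-scenario material comparison.
# Per-kWh battery intensities and infrastructure tonnages are assumed values.

BATTERY_KG_PER_KWH = {"Cu": 0.9, "Fe": 0.4, "Al": 0.6, "Li": 0.1}  # assumed

SCENARIOS = {
    # battery size per truck (kWh) from the study; infrastructure masses
    # (tonnes, whole 500 km corridor) are illustrative placeholders
    "plug-in":   {"battery_kwh": 800, "infra_t": {"Cu": 2000, "Fe": 6000, "Al": 500, "Li": 0}},
    "catenary":  {"battery_kwh": 200, "infra_t": {"Cu": 1500, "Fe": 4000, "Al": 200, "Li": 0}},
    "inductive": {"battery_kwh": 200, "infra_t": {"Cu": 1200, "Fe": 1000, "Al": 400, "Li": 0}},
}

def total_material_t(scenario: dict, n_trucks: int) -> dict:
    """Infrastructure mass plus battery mass for the whole fleet, in tonnes."""
    out = {}
    for metal, kg_per_kwh in BATTERY_KG_PER_KWH.items():
        battery_t = n_trucks * scenario["battery_kwh"] * kg_per_kwh / 1000.0
        out[metal] = scenario["infra_t"].get(metal, 0) + battery_t
    return out

for name, sc in SCENARIOS.items():
    totals = total_material_t(sc, n_trucks=5000)
    print(name, totals, "sum:", sum(totals.values()))
```

With these placeholder inputs the smaller 200 kWh battery dominates the comparison, reproducing the qualitative ordering reported in the abstract (inductive lowest, plug-in highest).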

Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger

Procedia PDF Downloads 42
667 Circle of Learning Using High-Fidelity Simulators Promoting a Better Understanding of Resident Physicians on Point-of-Care Ultrasound in Emergency Medicine

Authors: Takamitsu Kodama, Eiji Kawamoto

Abstract:

Introduction: Ultrasound in the emergency room has the advantages of being safe, fast, repeatable, and noninvasive. Focused point-of-care ultrasound (POCUS) in particular is used daily for prompt and accurate diagnoses and for quickly identifying critical and life-threatening conditions, which is why ultrasound has demonstrated its usefulness in emergency medicine. Its true value has been recognized once again in recent years. All resident physicians working in the emergency room should arguably be able to perform an ultrasound scan to interpret the signs and symptoms of deteriorating patients. However, practical education on ultrasound is still under development. To address this issue, we established a new educational program using high-fidelity simulators and evaluated its efficacy. Methods: The educational program includes didactic lectures and skill stations in a half-day course. An instructor gives a lecture on POCUS, such as the Rapid Ultrasound in Shock (RUSH) and/or Focused Assessment Transthoracic Echo (FATE) protocol, at the beginning of the course. Then, attendees receive scanning training with the cooperation of healthy simulated patients. Finally, attendees learn how to apply focused POCUS skills in clinical situations using high-fidelity simulators such as SonoSim® (SonoSim, Inc.) and SimMan® 3G (Laerdal Medical). Evaluation was conducted through questionnaires given to 19 attendees after two pilot courses. The questionnaires focused on understanding of the course concept and satisfaction. Results: All attendees answered the questionnaires. With respect to the degree of understanding, 12 attendees (number of valid responses: 13) scored four or more points out of five. High-fidelity simulators, especially SonoSim®, were highly appreciated for enhancing learning of how to handle ultrasound at an actual practice site by 11 attendees (number of valid responses: 12).
All attendees would encourage colleagues to take this course, reflecting the high level of satisfaction achieved. Discussion: The newly introduced educational course using high-fidelity simulators realizes a circle of learning that deepens understanding of focused POCUS in gradual stages. SonoSim® can faithfully reproduce scan images with pathologic ultrasound findings and provide experiential learning for a growing number of beginners such as resident physicians. In addition, valuable education can be provided when it is combined with SimMan® 3G. Conclusions: The newly introduced educational course using high-fidelity simulators appears to be effective and helps provide better education than conventional courses for emergency physicians.

Keywords: point-of-care ultrasound, high-fidelity simulators, education, circle of learning

Procedia PDF Downloads 270
666 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing

Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn

Abstract:

Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be analysed using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli.
However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly used in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
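The frontal alpha asymmetry index described above can be sketched numerically. The channel pair (F3/F4), the 8-13 Hz alpha band, and the synthetic sinusoidal signals are illustrative conventions, not data from the reviewed studies; alpha power is inversely related to cortical activation, which is why the log-ratio is read the way the abstract describes.

```python
# Minimal sketch of a frontal alpha asymmetry (FAA) computation on
# synthetic signals. F3/F4 channel names and 8-13 Hz band are assumptions.
import numpy as np

FS = 250  # sampling rate in Hz (assumed)

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Mean periodogram power in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def frontal_alpha_asymmetry(left_f3, right_f4, fs=FS):
    """FAA = ln(right alpha power) - ln(left alpha power).
    Because alpha is inversely related to activation, a negative FAA
    indexes relatively greater right-hemisphere activation."""
    return np.log(band_power(right_f4, fs)) - np.log(band_power(left_f3, fs))

t = np.arange(0, 4, 1.0 / FS)
f3 = np.sin(2 * np.pi * 10 * t)        # strong 10 Hz alpha over the left site
f4 = 0.5 * np.sin(2 * np.pi * 10 * t)  # weaker alpha over the right site
faa = frontal_alpha_asymmetry(f3, f4)
print(faa)  # negative: weaker right alpha, i.e. relatively more right activation
```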

Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency

Procedia PDF Downloads 101
665 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable shapes of the hazard rate is introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and thereby providing greater flexibility in analysing and modeling various data types. The proposed distribution includes a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed 'bathtub') hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other 3-parameter parametric survival distributions, such as the exponentiated Weibull, 3-parameter lognormal, 3-parameter gamma, 3-parameter Weibull, and 3-parameter log-logistic (also known as shifted log-logistic) distributions. The proposed distribution provided a better fit than all of the competing distributions based on goodness-of-fit tests, log-likelihood, and information criterion values. Finally, Bayesian analysis and assessment of the performance of Gibbs sampling for the data set are also carried out.
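The generalized family proposed in the abstract is not available in standard libraries, but its log-logistic sub-model is SciPy's `fisk` distribution, which is enough to sketch the maximum-likelihood workflow and the non-monotone hazard shape mentioned above. The data below are synthetic, not the paper's.

```python
# MLE fit of the log-logistic (SciPy `fisk`) sub-model on synthetic
# survival times; generating parameters c=2.0, scale=3.0 are arbitrary.
import numpy as np
from scipy import stats

data = stats.fisk.rvs(c=2.0, scale=3.0, size=2000, random_state=0)

# Maximum-likelihood fit of shape (c) and scale, with location fixed at 0
c_hat, loc_hat, scale_hat = stats.fisk.fit(data, floc=0)
print(c_hat, scale_hat)  # should be near the generating values 2.0 and 3.0

# Hazard rate h(t) = f(t) / S(t); for c > 1 it is unimodal (non-monotone),
# one of the shapes the generalized family is designed to capture.
t = np.linspace(0.01, 20, 200)
hazard = stats.fisk.pdf(t, c_hat, scale=scale_hat) / stats.fisk.sf(t, c_hat, scale=scale_hat)
```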

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 187
664 A Multi-Dimensional Neural Network Using the Fisher Transform to Predict the Price Evolution for Algorithmic Trading in Financial Markets

Authors: Cristian Pauna

Abstract:

Trading the financial markets is a widespread activity today. A large number of investors, companies, and public or private funds buy and sell every day in order to make a profit. Algorithmic trading has been the prevalent method for making trade decisions since the advent of electronic trading. Orders are sent almost instantly by computers using mathematical models. This paper presents a price prediction methodology based on a multi-dimensional neural network. Using the Fisher transform, the neural network is trained in a low-latency, auto-adaptive process to predict the price evolution for the next period of time. The model is designed especially for algorithmic trading and uses real-time price series. It was found that the characteristics of the Fisher function, applied at the node scale level, can generate reliable trading signals using the neural network methodology, and real-time tests showed that the method can be applied in any timeframe to trade the financial markets. The paper also includes the steps to implement the presented methodology in an automated trading system. Real trading results are displayed and analyzed in order to qualify the model. In conclusion, the compared results reveal that the neural network methodology, applied together with the Fisher transform at the node level, can generate good price predictions and build reliable trading signals for algorithmic trading.
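The Fisher transform itself, F(x) = 0.5 ln((1 + x) / (1 - x)), is standard; a common way to feed it prices is to first rescale each price into (-1, 1) over a rolling window. The sketch below shows only that preprocessing step, since the paper's network architecture is not specified here; the window length and clipping bound are assumptions.

```python
# Rolling-window Fisher transform of a price series, a typical way to
# condition inputs before a neural network. Window length is an assumption.
import numpy as np

def fisher_transform(prices, window=10):
    """Rescale each price to (-1, 1) within a trailing window, then apply
    F(x) = 0.5 * ln((1 + x) / (1 - x)), which gaussianizes the series."""
    prices = np.asarray(prices, dtype=float)
    out = np.full(len(prices), np.nan)  # undefined until a full window exists
    for i in range(window - 1, len(prices)):
        w = prices[i - window + 1 : i + 1]
        lo, hi = w.min(), w.max()
        x = 0.0 if hi == lo else 2.0 * (prices[i] - lo) / (hi - lo) - 1.0
        x = float(np.clip(x, -0.999, 0.999))  # avoid infinities at the bounds
        out[i] = 0.5 * np.log((1 + x) / (1 - x))
    return out

print(fisher_transform([1, 2, 3, 4, 5, 4, 3, 2, 1, 2, 3], window=5))
```

Large positive (negative) outputs flag prices at the top (bottom) of their recent range, which is the kind of sharpened turning-point signal the methodology exploits.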

Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, neural network

Procedia PDF Downloads 145
663 Fractal Nature of Granular Mixtures of Different Concretes Formulated with Different Methods of Formulation

Authors: Fatima Achouri, Kaddour Chouicha, Abdelwahab Khatir

Abstract:

Quality concrete must be made with selected materials chosen in optimal proportions so that, after placement, a minimum of voids remains in the produced material. The different formulation methods in use are based, for the most part, on a granular curve that describes an 'optimal granularity'. Many authors have engaged in fundamental research on granular arrangements. Comparisons of mathematical models reproducing these granular arrangements with experimental measurements of compactness have verified that the minimum porosity P over a given granular extent follows a power law. The best compactness in a finite medium is thus obtained with power laws, such as those of Furnas, Fuller, or Talbot, each preferring a particular exponent between 0.20 and 0.50. These considerations converge on the assumption that the optimal granularity of Caquot can be approximated by a power law. By analogy, it can then be analyzed as a granular structure of fractal type, since the internal-similarity properties that characterize fractal objects are also expressed by a power law. Optimized mixtures may thus be described as a series of granular classes nested on a regular hierarchical distribution, which, through cascading effects, gives the mix the same structure at different scales. This model is likely appropriate for the entire extent of the size distribution of the components, from correctly deflocculated cement particles (and silica fume) of micrometric dimensions to chippings of sometimes several tens of millimeters. As part of this research, the aim is to illustrate the application of fractal analysis to characterize optimized granular concrete mixtures through a so-called fractal dimension; different concretes were studied, and we show that their granular mixtures have a fractal structure regardless of the formulation method or the type of concrete.
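The power-law grading curves named above take the Fuller-Thompson form P(d) = (d / D_max)^q with exponent q between roughly 0.20 and 0.50; on a log-log plot this is a straight line of slope q, which is the link to a fractal dimension. A small sketch with illustrative sieve sizes:

```python
# Fuller-type grading curve and recovery of its exponent from a log-log fit.
# Sieve sizes and the exponent q = 0.45 are illustrative values.
import numpy as np

def fuller_passing(d, d_max, q=0.5):
    """Cumulative fraction passing sieve size d for a power-law grading."""
    return (np.asarray(d, dtype=float) / d_max) ** q

sieves = np.array([0.063, 0.125, 0.25, 0.5, 1, 2, 4, 8, 16])  # mm
p = fuller_passing(sieves, d_max=16, q=0.45)

# A self-similar (fractal-like) mixture gives a straight line in log-log
# coordinates; the fitted slope recovers the exponent q exactly here.
slope = np.polyfit(np.log(sieves), np.log(p), 1)[0]
print(slope)  # 0.45 for a pure power law
```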

Keywords: concrete formulation, fractal character, granular packing, method of formulation

Procedia PDF Downloads 243
662 The Impact of Formulate and Implementation Strategy for an Organization to Better Financial Consequences in Malaysian Private Hospital

Authors: Naser Zouri

Abstract:

Purpose: Measures of strategy formulation and implementation show how product- and market-based strategic management categories such as courtesy, competence, and compliance can build high loyalty within the financial ecosystem and address marketplace errors in relation to fair trade organization. Findings: The findings show the ability of executive-level management to motivate and make better decisions to resolve problems in a business organization, and indicate an ideal level for each intervention policy for a hypothetical household. Methodology/design: A questionnaire was used for data collection in both the pilot test and the main study, and was distributed among finance employees. Respondents were selected through non-probability sampling, specifically convenience sampling. The cost of administering the questionnaire among the respondents was manageable, but collecting data from the hospital proved very difficult. The survey items covered implementation strategy, environment, supply chain, and employees, reflecting the impact of implementation strategy on financial consequences, as well as strategy formulation, comprehensiveness of strategic design, and organizational performance, reflecting the impact of strategy formulation on financial consequences. Practical implications: The dynamic capability approach to strategy formulation and implementation focuses on the firm-specific processes through which firms integrate, build, or reconfigure resources, which is valuable for making a theoretical contribution. Originality/value of research: Going beyond the current discussion, we show that case studies have the potential to extend and refine theory. We shed new light on how dynamic capabilities research can benefit from case studies by discovering the qualifications that shape the development of capabilities and by determining the boundary conditions of the dynamic capabilities approach. Limitations of the study: The present study relies on a survey methodology for data collection, and responses from financial employees were difficult to obtain because of workplace constraints.

Keywords: financial ecosystem, loyalty, Malaysian market error, dynamic capability approach, rate-market, optimization intelligence strategy, courtesy, competence, compliance

Procedia PDF Downloads 290
661 The Two Question Challenge: Embedding the Serious Illness Conversation in Acute Care Workflows

Authors: D. M. Lewis, L. Frisby, U. Stead

Abstract:

Objective: Many patients receive invasive treatments in acute care or die in hospital without having had comprehensive goals-of-care conversations. Some of these treatments may not align with the patient's wishes, may be futile, and may cause unnecessary suffering. While many staff recognize the benefits of engaging patients and families in Serious Illness Conversations (a goals-of-care framework developed by Ariadne Labs in Boston), few feel confident and/or competent having these conversations in acute care. Another barrier may be the lack of incorporation into the current workflow. An educational exercise, titled the Two Question Challenge, was initiated on four medical units across two Vancouver Coastal Health (VCH) hospitals in an attempt to engage the entire interdisciplinary team in asking patients and families questions about goals of care and to improve the documentation of these expressed wishes and preferences. Methods: Four acute care units across two separate hospitals participated in the Two Question Challenge. On each unit, over the course of two eight-hour shifts, all members of the interdisciplinary team were asked to select at least two of nine goals-of-care questions, to pose these questions to a patient or family member during their shift, and then to document the conversations in a centralized Advance Care Planning/Goals of Care discussion record in the patient's chart. A visual representation of conversation outcomes was created to demonstrate to staff and patients the breadth of conversations that took place throughout the challenge. Staff and patients were interviewed about their experiences. Two palliative approach leads remained present on the units throughout the challenge to support, guide, or role-model these conversations.
Results: Across four acute care medical units, 47 interdisciplinary staff participated in the Two Question Challenge, including nursing, allied health, and a physician. A total of 88 questions were asked of patients or their families about goals of care, and 50 newly documented goals-of-care conversations were charted. Two code statuses were changed as a result of the conversations. Patients voiced an appreciation for these conversations, and staff were able to successfully incorporate the questions into their daily care. Conclusion: The Two Question Challenge proved to be an effective way of having teams explore the goals of care of patients and families in an acute care setting. Staff felt that they gained confidence and competence. Both staff and patients found these conversations meaningful and impactful and felt they were notably different from their usual interactions. Documentation of these conversations in a centralized location that is easily accessible to all care providers increased significantly. Application of the Two Question Challenge in non-medical units or other care settings, such as long-term care facilities or community health units, should be explored in the future.

Keywords: advance care planning, goals of care, interdisciplinary, palliative approach, serious illness conversations

Procedia PDF Downloads 91
660 Modified Clusterwise Regression for Pavement Management

Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella

Abstract:

Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. Prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters using both characteristics and a performance measure. This grouping is not always possible due to limited information: it is impractical to include all the potentially significant factors because some of them are unobserved or difficult to measure. Historical performance of pavement segments can be used as a proxy to incorporate the effect of the missing significant factors in the clustering process. The current state of the art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously; CLR incorporates the effect of significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. The International Roughness Index (IRI) was used as the pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. Results illustrate the advantage of using CLR. Previous studies have used CLR with experimental data; this study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions.
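The alternating structure behind CLR (assign each segment to the regression that fits it best, refit each cluster's regression, repeat) can be sketched as follows. The data, the median-split initialization heuristic, and k = 2 are illustrative assumptions; the study's actual mathematical program is not reproduced here.

```python
# Minimal clusterwise linear regression sketch: alternate between refitting
# one linear model per cluster and reassigning points to the best-fit model.
import numpy as np

def clr_fit(X, y, k=2, n_iter=20):
    """X: (n, p) explanatory variables; y: (n,) performance measure (e.g. IRI)."""
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])          # add an intercept column
    labels = (y > np.median(y)).astype(int) % k    # crude start (suits k=2)
    betas = np.zeros((k, Xb.shape[1]))
    for _ in range(n_iter):
        for j in range(k):                          # refit each cluster model
            m = labels == j
            if m.sum() > Xb.shape[1]:
                betas[j] = np.linalg.lstsq(Xb[m], y[m], rcond=None)[0]
        resid = (y[:, None] - Xb @ betas.T) ** 2    # squared error per model
        labels = resid.argmin(axis=1)               # reassign to best model
    return labels, betas

# Two synthetic performance regimes with clearly distinct models
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = np.where(x < 5, 1.0 + 0.4 * x, 6.0 + 1.2 * (x - 5)) + rng.normal(0, 0.1, 200)
labels, betas = clr_fit(x[:, None], y, k=2)
print(betas)  # one row per cluster: [intercept, slope]
```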

Keywords: clusterwise regression, pavement management system, performance model, optimization

Procedia PDF Downloads 238
659 Kinetic Study of Physical Quality Changes on Jumbo Squid (Dosidicus gigas) Slices during Application High-Pressure Impregnation

Authors: Mario Perez-Won, Roberto Lemus-Mondaca, Fernanda Marin, Constanza Olivares

Abstract:

This study presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration to jumbo squid (Dosidicus gigas) slices. Diffusion coefficients for both water and solids were improved by the process pressure and were influenced by the pressure level. The working conditions were pressures of 100, 250, and 400 MPa plus atmospheric pressure (0.1 MPa), for time intervals from 30 to 300 seconds, at a 15% NaCl concentration. The mathematical expressions used for the mass transfer simulations of both water and salt were those corresponding to the Newton, Henderson-Pabis, Page, and Weibull models, with the Weibull and Henderson-Pabis models presenting the best fit to the water and salt experimental data, respectively. The water diffusivity coefficients varied from 1.62 to 8.10x10⁻⁹ m²/s, whereas those for salt varied from 14.18 to 36.07x10⁻⁹ m²/s for the selected conditions. Finally, as to the quality parameters studied, the treatment at 250 MPa yielded a minimum hardness in the samples, whereas springiness, cohesiveness, and chewiness at the 100, 250, and 400 MPa treatments presented statistical differences with respect to unpressurized samples. The colour parameter L* (lightness) increased, but the b* (yellowish) and a* (reddish) parameters decreased with increasing pressure level; samples thus presented a brighter aspect and a mildly cooked appearance. The results presented in this study support the substantial potential of high-pressure application as a technique for compound impregnation.
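Fitting the Weibull model reported above, MR(t) = exp(-(t / β)^α), can be sketched with a standard least-squares routine. The time points match the study's 30-300 s range, but the moisture-ratio values and the generating parameters (α = 0.8, β = 150 s) are synthetic stand-ins, not the study's measurements.

```python
# Nonlinear fit of the Weibull mass-transfer model on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def weibull_mr(t, alpha, beta):
    """Dimensionless moisture (or salt) ratio at time t (seconds)."""
    return np.exp(-(t / beta) ** alpha)

t = np.array([30, 60, 90, 120, 180, 240, 300], dtype=float)    # seconds
mr = weibull_mr(t, alpha=0.8, beta=150.0)                      # synthetic truth
mr = mr + np.random.default_rng(0).normal(0, 0.005, t.size)    # measurement noise

(alpha_hat, beta_hat), _ = curve_fit(weibull_mr, t, mr, p0=[1.0, 100.0])
print(alpha_hat, beta_hat)  # close to the generating values 0.8 and 150
```

The same routine, swapped to the Henderson-Pabis form MR(t) = a exp(-k t), would cover the model reported as best for the salt data.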

Keywords: colour, diffusivity, high pressure, jumbo squid, modelling, texture

Procedia PDF Downloads 332
658 Bulbar Conjunctival Kaposi's Sarcoma Unmasked by Immune Reconstitution Syndrome

Authors: S. Mohd Afzal, R. O'Connell

Abstract:

Kaposi's sarcoma (KS) is the most common HIV-related cancer, and ocular manifestations occur in at least 25% of all KS cases. However, ocular presentations usually occur in the context of systemic KS, and isolated lesions are rare. We report a unique case of ocular KS masquerading as subconjunctival haemorrhage and only developing systemic manifestations after initiation of HIV treatment. Case: A 49-year-old man with a previous hypertensive stroke and newly diagnosed HIV infection presented with an acutely red left eye following repeated bouts of coughing. Given the convincing history of poorly controlled hypertension and cough, a diagnosis of subconjunctival haemorrhage was made. Over the next week, his ocular lesion began to improve, and he subsequently started anti-retroviral therapy. Prior to receiving anti-retroviral therapy, his CD4+ lymphocyte count was 194 cells/mm3 with an HIV viral load greater than 1 million copies/ml. This rapidly improved to a viral load of 150 copies/ml within 2 weeks of starting treatment. However, a few days after starting HIV treatment, his ocular lesion recurred. Ophthalmic examination was otherwise normal. He also developed widespread lymphadenopathy and multiple dark lesions on his torso. Histology and virology confirmed KS, systemically triggered by immune reconstitution syndrome (KS-IRIS). The patient has since successfully undergone chemotherapy. Discussion: Kaposi's sarcoma is an atypical tumour caused by human herpesvirus 8 (HHV-8), also known as Kaposi's sarcoma-associated herpesvirus (KSHV). In immunosuppressed patients, KSHV can also cause lymphoproliferative disorders such as primary effusion lymphoma and Castleman's disease (in our patient's case, these were excluded through histological analysis of lymph nodes). KSHV is one of the seven currently known human oncoviruses, and its pathogenesis is poorly understood.
Up to 13% of patients with HIV-related KS experience worsening of the disease after starting anti-retroviral treatment, due to a sudden increase in CD4 cell counts. Histology remains the diagnostic gold standard. Current British HIV Association (BHIVA) guidelines recommend treatment using anti-retroviral drugs, with either intralesional vinblastine for local disease or systemic chemotherapy for disseminated KS. Conclusion: This case is unique in that ocular KS as an initial presentation is rare, and our patient's diagnosis was only made after systemic lesions were triggered by immune reconstitution. KS should be considered an important differential diagnosis for red eyes in all patients at risk of acquiring HIV infection.

Keywords: human herpesvirus 8, human immunodeficiency virus, immune reconstitution syndrome, Kaposi’s sarcoma, Kaposi’s sarcoma-associated herpesvirus

Procedia PDF Downloads 326
657 The Financial Impact of Covid 19 on the Hospitality Industry in New Zealand

Authors: Kay Fielden, Eelin Tan, Lan Nguyen

Abstract:

In this research project, data were gathered at a Covid 19 conference held in June 2021 from industry leaders who discussed the impact of the global pandemic on the status of the New Zealand hospitality industry. Panel discussions on financials, human resources, health and safety, and recovery were conducted. The themes explored for the finance panel were customer demographics, hospitality sectors, financial practices, government impact, and cost of compliance. The aim was to see how the hospitality industry has responded to the global pandemic and the steps that have been taken for the industry to recover or sustain business. The main research question for this qualitative study is: What factors have impacted finance for the hospitality industry in New Zealand due to Covid 19? For financials, literature has been gathered on the global effects, and this is compared with the data gathered from the discussion panel through the lens of resilience theory. Resilience theory applied to the hospitality industry suggests that the challenges imposed by Covid 19 have been the catalyst for government initiatives, technical innovation, engagement with local communities, and boosted confidence. Transformations arising from these ground shifts have included a move towards sustainability, wellbeing, greater awareness of climate change, and community engagement. Initial findings suggest that there has been a shift in the customer base that has prompted regional accommodation providers to realign their offers and to become more flexible to attract and retain this realigned customer base. Dynamic pricing structures have been required to meet changing customer demographics. Flexible staffing arrangements include sharing staff between different accommodation providers, owners with multiple properties adopting different staffing arrangements, maintaining a good working relationship with the bank, and conserving cash.
Uncertain times necessitate changing revenue strategies to cope with external factors. Financial support offered by the government has cushioned the financial downturn for many in the hospitality industry, and managed isolation and quarantine (MIQ) arrangements have offered immediate financial relief for those hotels involved. However, there is concern over the long-term effects. Compliance with mandated health and safety requirements has meant that the hospitality industry has streamlined its approach to meeting those requirements and has invested in customer relations to keep paying customers informed of the health measures in place. Initial findings from this study lie within the resilience theory framework and are consistent with findings from the literature.

Keywords: global pandemic, hospitality industry, New Zealand, resilience

Procedia PDF Downloads 87
656 Violence against Women: A Study on the Aggressors' Profile

Authors: Giovana Privatte Maciera, Jair Izaías Kappann

Abstract:

Introduction: Violence against women is a complex phenomenon that accompanies a woman throughout her life and is the result of a social, cultural, political, and religious construction based on differences between the genders. Those differences are felt mainly because of the still-present patriarchal system, which naturalizes and legitimizes the asymmetry of power. As a consequence of women's long historical and collective effort for legislation against the impunity of violence against women on the national scene, a law known as Maria da Penha was enacted in 2006. The law was created as a protective measure for women who were victims of violence and, consequently, for the punishment of the aggressor. Methodology: Analysis of police inquiries filed at the Police Station for the Defense of Women of the city of Assis, with formal authorization of the justice system, for the period from 2013 to 2015. For the evaluation of the results, content analysis and the theoretical framework of psychoanalysis will be used. Results and Discussion: The final analysis of the inquiries demonstrated that violence against women is reproduced by society, and the aggressor, in most cases, is a member of the victim's own family, mainly the current or former spouse. The most common kinds of aggression were threats of bodily harm and physical violence, which normally occurs accompanied by psychological violence, the most painful for the victims. Most of the aggressors were white, older than the victim, employed, and had only primary schooling. But, contrary to expectations, only a minority of the aggressors were users of alcohol and/or drugs or had children in common with the victim. There is a contrast between the number of victims who admitted having suffered some type of violence from the same aggressor earlier and the number of victims who had registered an occurrence before. 
The aggressors often use a discourse of denial in their testimony or try to justify their act as if the victim were to blame. It is believed that several factors interact to influence the aggressor to commit the abuse, including psychological, personal, and sociocultural factors. One hypothesis is that the aggressor has a history of violence in the family of origin. After the aggressor is tried, whether convicted or not, there is usually no rehabilitation plan or supervision to enable his change. Conclusions: The study highlights the importance of examining the aggressor's characteristics and the reasons that led him to commit such violence, making possible the implementation of appropriate treatment to prevent and reduce aggressions, as well as the creation of programs and actions that enable communication and understanding concerning the theme. This is because recidivism is still high, since the punitive system is not enough and the law remains ineffective and inefficient in certain aspects and in its own functioning. A compulsion to repeat is perceived in victims and aggressors alike, because they almost always end up involved in disturbed and violent relationships characterized by a relation of subordination-dominance.

Keywords: aggressors' profile, gender equality, Maria da Penha law, violence against women

Procedia PDF Downloads 321
655 Diagnostic Delays and Treatment Dilemmas: A Case of Drug-Resistant HIV and Tuberculosis

Authors: Christi Jackson, Chuka Onaga

Abstract:

Introduction: We report a case of delayed diagnosis of extra-pulmonary INH-mono-resistant Tuberculosis (TB) in a South African patient with drug-resistant HIV. Case Presentation: A 36-year-old male was initiated on 1st line (NNRTI-based) anti-retroviral therapy (ART) in September 2009 and switched to 2nd line (PI-based) ART in 2011, according to local guidelines. He was being followed up at the outpatient wellness unit of a public hospital, where he was diagnosed with Protease Inhibitor-resistant HIV in March 2016. He had an HIV viral load (HIVVL) of 737000 copies/mL and a CD4 count of 10 cells/µL and presented with complaints of a productive cough, weight loss, chronic diarrhoea and a septic buttock wound. Several investigations were done on sputum, stool and pus samples, but all were negative for TB. The patient was treated with antibiotics, and the cough and the buttock wound improved. He was subsequently started on a 3rd-line ART regimen of Darunavir, Ritonavir, Etravirine, Raltegravir, Tenofovir and Emtricitabine in May 2016. He continued losing weight, became too weak to stand unsupported and started complaining of abdominal pain. Further investigations were done in September 2016, including a urine specimen for Line Probe Assay (LPA), which showed M. tuberculosis sensitive to Rifampicin but resistant to INH. A lymph node biopsy also showed histological confirmation of TB. Management and outcome: He was started on Rifabutin, Pyrazinamide and Ethambutol in September 2016, and Etravirine was discontinued. After 6 months on ART and 2 months on TB treatment, his HIVVL had dropped to 286 copies/mL, his CD4 count had improved to 179 cells/µL, and he showed clinical improvement. Pharmacy supply of his individualised drugs was unreliable and presented some challenges to continuity of treatment. He successfully completed his treatment in June 2017 while still maintaining virological suppression. 
Discussion: Several laboratory-related factors delayed the diagnosis of TB, including the unavailability of urine-lipoarabinomannan (LAM) and urine-GeneXpert (GXP) tests at this facility. Once the diagnosis was made, it presented a treatment dilemma due to the expected drug-drug interactions between his 3rd-line ART regimen and his INH-resistant TB regimen, and specialist input was required. Conclusion: TB is more difficult to diagnose in patients with severe immunosuppression, therefore additional tests like urine-LAM and urine-GXP can be helpful in expediting the diagnosis in these cases. Patients with non-standard drug regimens should always be discussed with a specialist in order to avoid potentially harmful drug-drug interactions.

Keywords: drug-resistance, HIV, line probe assay, tuberculosis

Procedia PDF Downloads 151
654 Adobe Attenuation Coefficient Determination and Its Comparison with Other Shielding Materials for Energies Found in Common X-Rays Procedures

Authors: Camarena Rodriguez C. S., Portocarrero Bonifaz A., Palma Esparza R., Romero Carlos N. A.

Abstract:

Adobe is a construction material that fulfills the same function as a conventional brick. Widely used since ancient times, it is present in an appreciable percentage of buildings in Latin America. Adobe is a mixture of clay and sand. Interest in the study of the properties of this material arises from its presence in the infrastructure of hospital radiological services located in places with low economic resources, where it serves to attenuate radiation. Some materials, such as lead and concrete, are the most used for shielding and are widely studied in the literature. The present study will determine the mass attenuation coefficient of Adobe. The minimum required thicknesses for the primary and secondary barriers will be estimated for the shielding of radiological facilities where conventional and dental X-rays are performed. For the experimental procedure, an X-ray source emitted direct radiation towards different thicknesses of an Adobe barrier, and a detector was placed on the other side. For this purpose, a UNFORS Xi solid-state detector was used, which collected information on the difference in radiation intensity. The exposure started at a tube voltage of 45 kV, which was then varied in increments of 5 kV, reaching a maximum of 125 kV. The X-ray tube was positioned at a distance of 0.5 m from the surface of the Adobe bricks, and the collimation of the radiation beam was set for an area of 0.15 m x 0.15 m. Finally, mathematical methods were applied to determine the mass attenuation coefficient for different energy ranges. In conclusion, the mass attenuation coefficient of Adobe was determined, and the approximate thicknesses of the most common Adobe barriers in hospital buildings were calculated for later application in radiological protection.
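The measurement described follows the Beer-Lambert law, I = I0·exp(−μx), so each intensity reading through a known thickness yields a linear attenuation coefficient μ, and dividing by the material density gives the mass attenuation coefficient μ/ρ. A minimal sketch of that calculation, where the intensities, thickness, and adobe density are illustrative assumptions rather than values from the study:

```python
import math

def linear_attenuation(i0, i, thickness_cm):
    """Linear attenuation coefficient mu (1/cm) from the Beer-Lambert law:
    I = I0 * exp(-mu * x)  =>  mu = ln(I0 / I) / x."""
    return math.log(i0 / i) / thickness_cm

def mass_attenuation(mu, density_g_cm3):
    """Mass attenuation coefficient mu/rho (cm^2/g)."""
    return mu / density_g_cm3

# Illustrative measurement: intensity drops from 100 to 60 through 2 cm of adobe
mu = linear_attenuation(100.0, 60.0, 2.0)
rho_adobe = 1.6  # g/cm^3, an assumed typical adobe density
print(round(mass_attenuation(mu, rho_adobe), 4))
```

In the study, one such coefficient would be obtained per tube voltage (45 to 125 kV in 5 kV steps), giving the energy dependence of μ/ρ from which barrier thicknesses can be sized.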

Keywords: Adobe, attenuation coefficient, radiological protection, shielding, x-rays

Procedia PDF Downloads 149
653 Anti-Phospholipid Antibody Syndrome Presenting with Seizure, Stroke and Atrial Mass: A Case Report

Authors: Rajish Shil, Amal Alduhoori, Vipin Thomachan, Jamal Teir, Radhakrishnan Renganathan

Abstract:

Background: Antiphospholipid antibody syndrome (APS) has a broad spectrum of thrombotic and non-thrombotic clinical manifestations. We present a case of APS presenting with seizure, stroke, and atrial mass. Case Description: A 38-year-old male presented with a headache of 10 days' duration and a tonic-clonic seizure. The neurological examination was normal. Magnetic resonance imaging of the brain showed a small acute right cerebellar infarct. Magnetic resonance angiography of the brain and neck showed a focal narrowing at the origin of the internal carotid artery bilaterally. The electroencephalogram was normal. He was started on aspirin, atorvastatin, and carbamazepine. Transthoracic and trans-esophageal echocardiography showed a pedunculated and lobular atrial mass, measuring 1 x 1.5 cm, which was freely mobile across the mitral valve opening into the left ventricular inflow. Autoimmune screening showed positive antiphospholipid antibodies in high titer (cardiolipin IgG > 120 units/mL, B2 glycoprotein IgG 90 units/mL). Anti-nuclear antibody was negative. Erythrocyte sedimentation rate and C-reactive protein levels were normal. The platelet count was low (111 x 10⁹/L). The patient underwent successful surgical removal of the mass, which looked like a thrombotic clot, and histopathological analysis confirmed it as a fibrinous clot with no evidence of tumor cells. The patient was started on full anticoagulation treatment and was followed up regularly in the clinic, where he did not have any further complications from the disease. Discussion: Our patient was diagnosed with APS based on the high positive anticardiolipin antibody IgG and B2 glycoprotein IgG levels, stroke, thrombocytopenia, and abnormal echo findings. Thrombotic vegetation can mimic an atrial myxoma on echo. 
Conclusion: APS can present with neurological and cardiac manifestations, and therefore a high index of suspicion is necessary for a diagnosis, as it can affect both short- and long-term treatment plans and prognosis. Young patients presenting with neurological symptoms such as seizures or weakness and a radiological diagnosis of stroke, in whom an atrial mass could be thought to be the cause, should therefore be screened for any concomitant findings of thrombocytopenia and/or activated partial thromboplastin time prolongation, which should raise the suspicion of vasculitis, specifically APS, as the primary cause of the clinical presentation.

Keywords: antiphospholipid syndrome, seizures, atrial mass, stroke

Procedia PDF Downloads 102
652 Plastic Pollution: Analysis of the Current Legal Framework and Perspectives on Future Governance

Authors: Giorgia Carratta

Abstract:

Since the beginning of mass production, plastic items have been crucial in our daily lives. Thanks to their physical and chemical properties, plastic materials have proven almost irreplaceable in a number of economic sectors such as packaging, automotive, building and construction, textiles, and many others. At the same time, the disruptive consequences of plastic pollution have been progressively brought to light in all environmental compartments. The overaccumulation of plastics in the environment, and its adverse effects on habitats, wildlife, and (most likely) human health, represents a call for action to decision-makers around the globe. From a regulatory perspective, plastic production is an unprecedented challenge at all levels of governance. At the international level, the design of new legal instruments, the amendment of existing ones, and coordination among the several relevant policy areas require considerable effort. Under the pressure of both increasing scientific evidence and a concerned public opinion, countries seem to be slowly moving towards the discussion of a new international ‘plastic treaty.’ However, whether, how, and with what scope such an instrument would be adopted remains to be seen. Additionally, governments are establishing regional-based strategies, designed to consider the specificities of the plastic issue in a given geographical area. Thanks to the new Circular Economy Action Plan, approved in March 2020 by the European Commission, EU countries are slowly but steadily shifting to a carbon-neutral, circular economy in an attempt to reduce the pressure on natural resources and, in parallel, facilitate sustainable economic growth. In this context, the EU Plastic Strategy promises to change the way plastic is designed, produced, used, and treated after consumption. In the EU27 Member States alone, almost 26 million tons of plastic waste are generated every year, of which 24.9% is still destined for landfill. 
Positive effects of the Strategy also include more effective protection of the environment, especially the marine environment, the reduction of greenhouse gas emissions, a reduced need for imported fossil energy sources, and more sustainable production and consumption patterns. As promising as it may sound, the road ahead is still long. The need to implement these measures in domestic legislation makes their outcome difficult to predict at the moment. An analysis of the current international and European Union legal framework on plastic pollution, binding and voluntary instruments included, could serve to detect ‘blind spots’ in current governance as well as to facilitate the development of policy interventions along the plastic value chain, where they appear most needed.

Keywords: environmental law, European union, governance, plastic pollution, sustainability

Procedia PDF Downloads 97
651 Outcome-Based Education as Mediator of the Effect of Blended Learning on the Student Performance in Statistics

Authors: Restituto I. Rodelas

Abstract:

Higher education has adopted outcomes-based education from K-12. In this approach, the teacher uses any teaching and learning strategies that enable the students to achieve the learning outcomes. The students may be required to exert more effort and figure things out on their own. Hence, outcomes-based students are assumed to be more responsible and more capable of applying the knowledge learned. Another approach that higher education in the Philippines is starting to adopt from other countries is blended learning. This combination of classroom and fully online instruction and learning is expected to be more effective. Participating in the online sessions, however, is entirely up to the students. Thus, the effect of blended learning on the performance of students in Statistics may be mediated by outcomes-based education. If there is a significant positive mediating effect, then blended learning can be optimized by integrating outcomes-based education. In this study, the sample will consist of four blended learning Statistics classes at Jose Rizal University in the second semester of AY 2015–2016. Two of these classes will be assigned randomly to the experimental group that will be handled using outcomes-based education. The two classes in the control group will be handled using the traditional lecture approach. Prior to the discussion of the first topic, a pretest will be administered. The same test will be given as a posttest after the last topic is covered. In order to establish equality of the groups' initial knowledge, a single-factor ANOVA of the pretest scores will be performed. A single-factor ANOVA of the posttest-pretest score differences will also be conducted to compare the performance of the experimental and control groups. When a significant difference is obtained in any of these ANOVAs, post hoc analysis will be done using Tukey's honestly significant difference (HSD) test. 
The mediating effect will be evaluated using correlation and regression analyses. The groups' initial knowledge is taken to be equal when the result of the pretest-scores ANOVA is not significant. If the result of the score-differences ANOVA is significant and the post hoc test indicates that the classes in the experimental group have significantly different scores from those in the control group, then outcomes-based education has a positive effect. Let blended learning be the independent variable (IV), outcomes-based education the mediating variable (MV), and the score difference the dependent variable (DV). There is a mediating effect when the following requirements are satisfied: significant correlation of IV with DV, significant correlation of IV with MV, a significant relationship of MV to DV when both IV and MV are predictors in a regression model, and the absolute value of the coefficient of IV as sole predictor is larger than that when both IV and MV are predictors. With a positive mediating effect of outcomes-based education on the effect of blended learning on student performance, it will be recommended to integrate outcomes-based education into blended learning. This will yield the best learning results.
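The single-factor ANOVA planned above can be sketched from first principles: the F statistic is the ratio of between-group to within-group mean squares. The scores below are made up for illustration, not data from the study:

```python
def one_way_anova_f(groups):
    """F statistic for a single-factor ANOVA over a list of score lists."""
    k = len(groups)                               # number of groups (classes)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical pretest scores for the four classes (two experimental, two control)
pretest = [[12, 15, 14, 13], [13, 14, 15, 12], [14, 13, 12, 15], [15, 12, 13, 14]]
print(one_way_anova_f(pretest))  # 0.0 here: all four group means are equal
```

A non-significant F against the F(k−1, N−k) distribution on the pretest scores would support equal initial knowledge; a significant F on the posttest-pretest differences would then be followed by Tukey's HSD and the regression-based mediation check.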

Keywords: outcome-based teaching, blended learning, face-to-face, student-centered

Procedia PDF Downloads 281