Search results for: congestive heart failure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3380

920 Factors That Contribute to Noise Induced Hearing Loss Amongst Employees at the Platinum Mine in Limpopo Province, South Africa

Authors: Livhuwani Muthelo, R. N. Malema, T. M. Mothiba

Abstract:

Long-term exposure to excessive noise in the mining industry increases the risk of noise induced hearing loss (NIHL), with consequences for employees' health, productivity and overall quality of life. Objective: The objective of this study was to investigate the factors that contribute to Noise Induced Hearing Loss amongst employees at the Platinum mine in the Limpopo Province, South Africa. Study method: A qualitative, phenomenological, exploratory, descriptive, contextual design was applied in order to explore and describe the contributory factors. Purposive non-probability sampling was used to select 10 male employees who were diagnosed with NIHL in the year 2014 in four mine shafts, and 10 managers who were involved in a Hearing Conservation Programme. The data were collected using semi-structured one-on-one interviews and analysed qualitatively following Tesch's approach. Results: The following themes emerged: experiences and challenges faced by employees in the work environment, hearing protective device factors, and management and leadership factors. Hearing loss was caused by partial application of guidelines, policies, and procedures from the Department of Minerals and Energy. Conclusion: The study results indicate that although guidelines, policies, and procedures are available, failure to implement any one element will affect the development and maintenance of employees' hearing. It is recommended that mine management apply the guidelines, policies, and procedures and promptly repair broken hearing protective devices.

Keywords: employees, factors, noise induced hearing loss, noise exposure

Procedia PDF Downloads 127
919 The Fifth Political Theory and Countering Terrorism in the Post 9/11 Era

Authors: Rana Eijaz Ahmad

Abstract:

This paper explains the Fifth Political Theory, which challenges the three plus one existing theories (Capitalism, Marxism and Fascism + the Fourth Political Theory). It holds that 'it is the human ambiance that evolves any political system to survive, instead of borrowing imported thoughts to live in a specific environment, in which legitimacy leads to authority and promotes humanism.' According to this theory, no state is allowed to dictate or install any political system upon other states. It is the born right of individuals to choose a political system or a set of values that will make their structures and functions efficient enough to support the system's harmony and counter negative forces successfully. In the post-9/11 era, it is observed that all existing theories - Capitalism, Marxism, Fascism and the Fourth Political Theory - have remained unsuccessful in resolving the global crisis. The so-called war against terrorism has proved to be a war for terrorism, creating a vacuum on the global stage and worsening the crisis. The Fifth Political Theory is an answer to countering terrorism in the twenty-first century. It calls for accountability of the United Nations for its failure to sustain peace at the global level. Therefore, the UN Charter should be implemented in its true letter and spirit. All independent sovereign states have the right to evolve a political system that suits them best for sustaining harmony at home. This is the only way to counter terrorism. This paper adopts a mixed-method approach: qualitative, quantitative and comparative methods are used along with secondary sources. The objective of this paper is to create knowledge for the benefit of human beings with a logical and rational argument. It will help political scientists and scholars in conflict management and countering terrorism on pragmatic grounds.

Keywords: capitalism, fourth political theory, fifth political theory, Marxism, fascism

Procedia PDF Downloads 381
918 Determination of the Pull-Out/Holding Strength at the Taper-Trunnion Junction of Hip Implants

Authors: Obinna K. Ihesiulor, Krishna Shankar, Paul Smith, Alan Fien

Abstract:

Excessive fretting wear at the taper-trunnion junction (trunnionosis) apparently contributes to the high failure rates of hip implants. Implant wear and corrosion lead to the release of metal particulate debris and the subsequent release of metal ions at the taper-trunnion surface. This results in a type of metal poisoning referred to as metallosis. The consequences of metal poisoning include osteolysis (bone loss), osteoarthritis (pain), aseptic loosening of the prosthesis and revision surgery. At follow-up after revision surgery, metal debris particles are commonly found in numerous locations. Background: A stable connection between the femoral ball head (taper) and stem (trunnion) is necessary to prevent relative motion and corrosion at the taper junction. Hence, the importance of component assembly cannot be over-emphasized. The aim of this study is therefore to determine the influence of head-stem junction assembly by press fitting, and of the subsequent disengagement/disassembly, on the connection strength between the taper ball head and stem. Methods: CoCr femoral heads were assembled with high stainless hydrogen steel stems (trunnions) by push-in, i.e. press fit, and disengaged by a pull-out test. The strength and stability of the connections were evaluated by measuring the head pull-out forces according to ISO 7206-10 standards. Findings: The head-stem junction strength increases linearly with assembly force.
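
The reported linear trend between assembly and pull-out force can be summarised with a simple least-squares fit. The sketch below is illustrative only: the force values are hypothetical placeholders, since the abstract states only that the relationship is approximately linear.

```python
# A minimal sketch of the assembly-force vs. pull-out-force relationship
# described above. The force values are hypothetical placeholders.
import numpy as np

assembly_force_kN = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # push-in (press-fit) forces
pull_out_force_kN = np.array([1.1, 2.3, 3.2, 4.4, 5.3])    # measured disassembly forces

# Least-squares straight line: pull-out ~ slope * assembly + intercept
slope, intercept = np.polyfit(assembly_force_kN, pull_out_force_kN, 1)
print(f"pull-out = {slope:.2f} x assembly + {intercept:.2f} kN")
```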

Keywords: wear, modular hip prosthesis, taper head-stem, force assembly and disassembly

Procedia PDF Downloads 400
917 Integrating Explicit Instruction and Problem-Solving Approaches for Efficient Learning

Authors: Slava Kalyuga

Abstract:

There are two opposing major points of view on the optimal degree of initial instructional guidance that is usually discussed in the literature by the advocates of the corresponding learning approaches. Using unguided or minimally guided problem-solving tasks prior to explicit instruction has been suggested by productive failure and several other instructional theories, whereas an alternative approach - using fully guided worked examples followed by problem solving - has been demonstrated as the most effective strategy within the framework of cognitive load theory. An integrated approach discussed in this paper could combine the above frameworks within a broader theoretical perspective which would allow bringing together their best features and advantages in the design of learning tasks for STEM education. This paper represents a systematic review of the available empirical studies comparing the above alternative sequences of instructional methods to explore effects of several possible moderating factors. The paper concludes that different approaches and instructional sequences should coexist within complex learning environments. Selecting optimal sequences depends on such factors as specific goals of learner activities, types of knowledge to learn, levels of element interactivity (task complexity), and levels of learner prior knowledge. This paper offers an outline of a theoretical framework for the design of complex learning tasks in STEM education that would integrate explicit instruction and inquiry (exploratory, discovery) learning approaches in ways that depend on a set of defined specific factors.

Keywords: cognitive load, explicit instruction, exploratory learning, worked examples

Procedia PDF Downloads 126
916 Human Identification Using Local Roughness Patterns in Heartbeat Signal

Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori

Abstract:

Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms characterizing the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, ensuring that a legitimate individual is being identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual's heartbeats at different instances of time. These variations may occur due to muscle flexure, changes of mental or emotional state, and changes of sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification, which is based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to obtain the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that it does not depend on the accuracy of detecting the QRS complex, unlike conventional methods. Supervised recognition methods are then designed, using minimum-distance-to-mean and Bayesian classifiers, to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the National Metrology Institute of Germany (NMIG) - PTB database showed that the proposed new method is promising compared to a conventional interval and amplitude feature-based method.
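
As a concrete illustration of the roughness-pattern pipeline described above, the following sketch filters a raw ECG trace and builds a local-binary-pattern histogram. The filter settings follow the abstract (second-order band-pass Butterworth, normalized cut-offs 0.00025 and 0.04); the 8-sample window and the binary weighting scheme are illustrative assumptions, not the authors' exact parameters.

```python
# A minimal sketch of the local roughness descriptor, assuming an 8-sample window.
import numpy as np
from scipy.signal import butter, filtfilt

def local_roughness_histogram(ecg, window=8):
    """Return a normalised histogram of local binary roughness patterns for one ECG trace."""
    b, a = butter(2, [0.00025, 0.04], btype="band")   # pre-filter the raw signal
    x = filtfilt(b, a, ecg)
    half = window // 2
    weights = 2 ** np.arange(window)                  # binary weights for the pattern
    codes = []
    for i in range(half, len(x) - half):
        neighbours = np.concatenate([x[i - half:i], x[i + 1:i + 1 + half]])
        bits = (neighbours >= x[i]).astype(int)       # compare neighbours with the centre point
        codes.append(int(np.dot(bits, weights)))
    hist, _ = np.histogram(codes, bins=2 ** window, range=(0, 2 ** window))
    return hist / hist.sum()                          # per-subject descriptor
```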

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification

Procedia PDF Downloads 404
916 Critical Success Factors of Quality Requirement Change Management

Authors: Jamshed Ahmad, Abdul Wahid Khan, Javed Ali Khan

Abstract:

Managing software quality requirement change is a difficult task in the field of software engineering. Avoiding incoming changes results in user dissatisfaction, while accommodating too many requirement changes may delay product delivery. Poor requirements management is considered the primary cause of software failure. It becomes even more challenging in global software outsourcing. Addressing success factors in quality requirement change management is desirable today due to the frequent change requests from end-users. In this research study, success factors are recognized and scrutinized with the help of a systematic literature review (SLR). In total, 16 success factors were identified that significantly impact software quality requirement change management. The findings show that Proper Requirement Change Management, Rapid Delivery, Quality Software Product, Access to Market, Project Management, Skills and Methodologies, Low Cost/Effort Estimation, Clear Plan and Road Map, Agile Processes, Low Labor Cost, User Satisfaction, Communication/Close Coordination, Proper Scheduling and Time Constraints, Frequent Technological Changes, Robust Model, and Geographical Distribution/Cultural Differences are the key factors that influence software quality requirement change. The recognized success factors are validated with the help of various research methods, i.e., case studies, interviews, surveys and experiments. These factors are then scrutinized by continent, database, company size and time period. Based on these findings, requirement changes can be implemented in a better way.

Keywords: global software development, requirement engineering, systematic literature review, success factors

Procedia PDF Downloads 197
914 The Hepatoprotective Effects of Aqueous Extract of Levisticum Officinale against Paraquat Toxicity in Hepatocytes

Authors: Hasan Afarnegan, Ali Shahraki, Jafar Shahraki

Abstract:

Paraquat is widely used as a strong nitrogen-based herbicide for controlling weeds in agriculture. This poison is extremely toxic to humans; it induces multi-organ failure by accumulating in cells, and many instances of death have occurred due to its poisoning. Paraquat is metabolized primarily in the liver. The purpose of this study was to assess the effects of the aqueous extract of Levisticum officinale on oxidative status and biochemical factors in hepatocytes exposed to paraquat. Our results showed that the hepatocyte destruction induced by paraquat is mediated by reactive oxygen species (ROS) production, lipid peroxidation and a decrease of mitochondrial membrane potential, which were significantly (P<0.05) prevented by the aqueous extract of Levisticum officinale (100, 200 and 300 µg/ml). These effects of paraquat were also prevented by antioxidants and ROS scavengers (α-tocopherol, DMSO, mannitol) and by a mitochondrial permeability transition (MPT) pore sealing compound (carnitine). The MPT pore sealing compound inhibited the hepatotoxicity, indicating that paraquat induced cell death via the mitochondrial pathway. Pretreatment of hepatocytes with the aqueous extract of Levisticum officinale, antioxidants and ROS scavengers also blocked hepatic cell death caused by paraquat, suggesting that oxidative stress may directly induce the decline of mitochondrial membrane potential. In conclusion, paraquat hepatotoxicity can be attributed to oxidative stress followed by disruption of the mitochondrial membrane potential. Levisticum officinale aqueous extract, presumably due to its strong antioxidant properties, could protect rat hepatocytes against the destructive effects of paraquat.

Keywords: hepatocyte protection, levisticum officinale, oxidative stress, paraquat

Procedia PDF Downloads 222
913 Using Possibility Books to Develop Creativity Mindsets - A New Pedagogy for Learning Science, Math, and Engineering

Authors: Michael R. Taber, Kristin Stanec

Abstract:

This paper presents year two of a longitudinal study on implementing Possibility Books into undergraduate courses to develop a student's creativity mindset: tolerating ambiguity, willingness to risk failure, curiosity, and openness to embrace possibility thinking through unexpected connections. Courses involved in this research span disciplines in the natural and social sciences and the humanities. Year one of the project developed indices from which baseline data could be analyzed. The two significant indices (> 0.7) were "creativity mindset" and "intentional interactions." Preliminary qualitative and quantitative data analysis indicated that students found the new pedagogical intervention a safe space to learn new strategies, recognize patterns, and define structures through innovative notetaking forms. Possibility Books in natural science courses were designed to develop students' conceptualization of science and math. Using Possibility Books in all disciplines provided a space for students to practice divergent thinking (i.e., possibilities), convergent thinking (i.e., forms that express meaning), and risk-taking (i.e., the vulnerability associated with expression). Qualitative coding of open responses on a post-survey revealed two major themes: 1) Possibility Books provided a mind space for learning about self, and 2) they provided a calming opportunity to connect concepts. Quantitative analysis indicated significant correlations between focused headspace and notetaking (r = 0.555, p < 0.001), and between focused headspace and connecting with others (r = 0.405, p < 0.001).

Keywords: pedagogy, science education, learning methods, creativity mindsets

Procedia PDF Downloads 24
912 Realizing the Full Potential of Islamic Banking System: Proposed Suitable Legal Framework for Islamic Banking System in Tanzania

Authors: Maulana Ayoub Ali, Pradeep Kulshrestha

Abstract:

The laws of any given secular state contribute substantially to the growth of the Islamic banking system because the system uses conventional laws to govern its activities. Therefore, the former should be ready to accommodate the latter in order to make the Islamic banking system work properly without affecting the current conventional banking system. Islamic financial rules have been practiced since the birth of Islam. Following the recent world economic challenges in the financial sector, a rapid rebirth of the contemporary Islamic ethical banking system took place. The emergence of the Islamic banking system is due to various reasons, including but not limited to the failure of the interest-based economy in solving financial problems around the globe. The Islamic banking system has therefore been adopted as an alternative banking system in order to recover the highly damaged global financial sector. But the Islamic banking system has been facing a number of challenges which hinder its smooth operation in different parts of the world. This paper does not aim to discuss challenges other than legal ones, although others are touched upon where it was considered proper to do so. Generally, many things were discovered in the course of writing this paper. The most important is that the regulatory and supervisory framework for the Islamic banking system, in Tanzania and in other nations, is considered to be a crucial part of the development of the Islamic banking industry. This paper analyses what has been observed in the study of that area and recommends necessary actions to be taken in order to enable the Islamic banking system to reach its full potential of serving the larger community by providing an ethical, equitable, affordable, interest-free and society-centred banking system around the globe.

Keywords: Islamic banking, interest free banking, ethical banking, legal framework

Procedia PDF Downloads 149
911 Weibull Cumulative Distribution Function Analysis with Life Expectancy Endurance Test Result of Power Window Switch

Authors: Miky Lee, K. Kim, D. Lim, D. Cho

Abstract:

This paper presents the planning, the rationale for test specification derivation, the sampling requirements, the test facilities, and the result analysis used to conduct lifetime expectancy endurance tests on power window switches (PWS), considering thermally induced mechanical stress under diurnal cyclic temperatures during normal operation (power cycling). The detailed process of analysis and the test results for the selected PWS set are discussed in this paper. A statistical approach to 'lifetime expectancy' was applied to the measurement standards dealing with PWS lifetime determination through endurance tests, and the choice of approach, within the framework of the task, is explained. The present task was dedicated to voltage drop measurement to derive lifetime expectancy, while others mostly consider contact or surface resistance. The measurements to be performed and the main measurement instruments are fully described accordingly. The failure data from the tests were analyzed to conclude lifetime expectancy through a statistical method using the Weibull cumulative distribution function. The first goal of this task is to develop a realistic worst-case lifetime endurance test specification, because the large number of existing switch test standards cannot induce the degradation mechanisms that make the switches less reliable. The second goal is to assess the quantitative reliability status of PWS currently manufactured, based on the test specification newly developed through this project. The last and most important goal is to satisfy customer requirements regarding product reliability.
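
For readers unfamiliar with the statistical method named above, the following sketch shows how failure data from an endurance test can be fitted to a Weibull distribution and used to estimate failure probabilities and a B10 life. The failure-cycle values are hypothetical placeholders, not the project's measured data.

```python
# A minimal sketch of a Weibull lifetime analysis; the data below are illustrative only.
import numpy as np
from scipy.stats import weibull_min

failure_cycles = np.array([41e3, 55e3, 63e3, 72e3, 80e3, 95e3, 110e3])  # hypothetical failures

# Fit the two-parameter Weibull distribution (location fixed at 0).
shape, loc, scale = weibull_min.fit(failure_cycles, floc=0)

# Weibull CDF F(t) = 1 - exp(-(t/eta)^beta): probability of failure by t cycles.
t = 60e3
prob_failure = weibull_min.cdf(t, shape, loc=loc, scale=scale)

# B10 life: number of cycles by which 10 % of the population is expected to fail.
b10 = weibull_min.ppf(0.10, shape, loc=loc, scale=scale)
print(f"beta={shape:.2f}, eta={scale:.0f} cycles, F({t:.0f})={prob_failure:.2f}, B10={b10:.0f}")
```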

Keywords: power window switch, endurance test, Weibull function, reliability, degradation mechanism

Procedia PDF Downloads 235
910 Forming Limit Analysis of DP600-800 Steels

Authors: Marcelo Costa Cardoso, Luciano Pessanha Moreira

Abstract:

In this work, the plastic behaviour of cold-rolled, zinc-coated dual-phase steel sheets of DP600 and DP800 grades is first investigated with the help of uniaxial, hydraulic bulge and Forming Limit Curve (FLC) tests. The uniaxial tensile tests were performed in three angular orientations with respect to the rolling direction to evaluate the strain-hardening and plastic anisotropy. True stress-strain curves at large strains were determined from hydraulic bulge testing and fitted to a work-hardening equation. The limit strains are defined at both localized necking and fracture conditions according to Nakajima's hemispherical punch procedure. Also, an elasto-plastic localization model is proposed in order to predict strain- and stress-based forming limit curves. The investigated dual-phase sheets showed good formability in the biaxial stretching and drawing FLC regions. For both DP600 and DP800 sheets, the corresponding numerical predictions overestimated and underestimated the experimental limit strains in the biaxial stretching and drawing FLC regions, respectively. This can be attributed to the restricted failure necking condition adopted in the numerical model, which is not suitable to describe the tensile and shear fracture mechanisms in advanced high strength steels under equibiaxial and biaxial stretching conditions.
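
The abstract does not name the work-hardening equation fitted to the bulge-test data; as an illustration only, the sketch below assumes a Swift law, sigma = K(eps0 + eps)^n, and fits it to synthetic stress-strain points with a DP-steel-like response.

```python
# A minimal sketch of fitting bulge-test data to an assumed Swift hardening law.
import numpy as np
from scipy.optimize import curve_fit

def swift(strain, K, eps0, n):
    """Swift work-hardening law: true stress as a function of true plastic strain."""
    return K * (eps0 + strain) ** n

rng = np.random.default_rng(0)
true_strain = np.linspace(0.02, 0.6, 30)                                   # hypothetical strains
true_stress = 1000.0 * (0.002 + true_strain) ** 0.15 + rng.normal(0, 5, 30)  # synthetic response

(K, eps0, n), _ = curve_fit(swift, true_strain, true_stress,
                            p0=(1000.0, 0.002, 0.2),
                            bounds=([100.0, 0.0, 0.01], [3000.0, 0.05, 0.5]))
print(f"K = {K:.0f} MPa, eps0 = {eps0:.4f}, n = {n:.3f}")
```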

Keywords: advanced high strength steels, forming limit curve, numerical modelling, sheet metal forming

Procedia PDF Downloads 372
909 Proposal of Analytical Model for the Seismic Performance Evaluation of Reinforced Concrete Frames with Coupled Cross-laminated Timber Infill Panels

Authors: Velázquez Alejandro, Pradhan Sujan, Yoon Rokhyun, Sanada Yasushi

Abstract:

The utilization of new materials as an alternative solution to decrease the environmental impact of the construction industry has been gaining more relevance in the architectural design and construction industry. One such material is cross-laminated timber (CLT), an engineered timber solution that excels for its faster construction times, workability, lightweight, and capacity for carbon storage. This material is usually used alone for the entire structure or combined with steel frames, but a hybrid with reinforced concrete (RC) is rarer. Since RC is one of the most used materials worldwide, a hybrid with CLT would allow further utilization of the latter, and in the process, it would help reduce the environmental impact of RC construction to achieve a sustainable society, but first, the structural performance of such hybrids must be understood. This paper focuses on proposing a model to predict the seismic performance of RC frames with CLT panels as infills. A series of static horizontal cyclic loading experiments were conducted on two 40% scale specimens of reinforced concrete frames with and without CLT panels at Osaka University, Japan. An analytical model was created to simulate the seismic performance of the RC frame with CLT infill based on the experimental results. The proposed model was verified by comparing the experimental and analytical results, showing that the load-deformation relationship and the failure mechanism agreed well with limited error. Hence, the proposed analytical model can be implemented for the seismic performance evaluation of the RC frames with CLT infill.

Keywords: analytical model, multi spring, performance evaluation, reinforced concrete, rocking mechanism, wooden wall

Procedia PDF Downloads 106
908 High Temperature Deformation Behavior of Al0.2CoCrFeNiMo0.5 High Entropy Alloy

Authors: Yasam Palguna, Rajesh Korla

Abstract:

The efficiency of thermally operated systems can be improved by increasing the operating temperature, thereby decreasing fuel consumption and the carbon footprint. Hence, there is a continuous need to replace existing materials with new alloys with higher temperature working capabilities. During the last decade, multi-principal-element alloys, commonly known as high entropy alloys, have been getting more attention because of their superior high temperature strength along with good high temperature corrosion and oxidation resistance. The present work focuses on the microstructure and high temperature tensile behavior of the Al0.2CoCrFeNiMo0.5 high entropy alloy (HEA). Wrought Al0.2CoCrFeNiMo0.5 high entropy alloy, produced by vacuum induction melting followed by thermomechanical processing, was tested in the temperature range of 200 to 900 °C. It exhibits very good resistance to softening with increasing temperature up to 700 °C, and thereafter there is a rapid decrease in strength, especially beyond 800 °C, which may be due to the simultaneous occurrence of recrystallization and precipitate coarsening. Further, it exhibits superplastic-like behavior with a uniform elongation of ~275% at 900 °C and a strain rate of 1 x 10⁻³ s⁻¹, which may be due to the presence of fine, stable, equiaxed grains. A strain rate sensitivity of 0.3 was observed, suggesting that solute-drag dislocation glide might be the active mechanism during the superplastic-like deformation. Post-deformation microstructures suggest that cavitation at the sigma phase-matrix interface is the failure mechanism during high temperature deformation. Finally, the high temperature properties of the present alloy are compared with contemporary high temperature materials such as ferritic and austenitic steels, and superalloys.
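
As a brief aside on how a strain rate sensitivity of m ≈ 0.3 is typically obtained, the short calculation below applies the standard relation m = ln(sigma2/sigma1) / ln(rate2/rate1) to two flow stresses. The abstract reports only the resulting value of about 0.3, so the stresses and rates used here are hypothetical.

```python
# A small worked illustration of the strain-rate sensitivity calculation (values hypothetical).
import math

rate1, rate2 = 1e-4, 1e-3          # s^-1, a strain-rate jump of one decade
sigma1, sigma2 = 100.0, 199.5      # MPa, illustrative flow stresses at the two rates

m = math.log(sigma2 / sigma1) / math.log(rate2 / rate1)
print(f"strain-rate sensitivity m = {m:.2f}")   # ~0.30
```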

Keywords: high entropy alloy, high temperature deformation, super plasticity, post-deformation microstructures

Procedia PDF Downloads 165
907 The Adoption of Sustainable Textiles & Smart Apparel Technology for the South African Healthcare Sector

Authors: Winiswa Mavutha

Abstract:

The adoption of sustainable textiles and smart apparel technology is crucial for the South African healthcare sector: it offers innovative solutions to track patient health and improve overall healthcare delivery. This research focuses on how sustainable textile fibers can be integrated with smart apparel technologies, utilizing embedded sensors and data analytics, to enable real-time monitoring of patients. Smart apparel technology allows constant monitoring of patients' heart rate, temperature, and blood pressure, as well as delivering medication electronically, which enhances patient care and reduces hospital readmissions. Currently, the South African healthcare system faces its own set of challenges, such as limited resources and a heavy disease burden. Apparel and textile manufacturers in South Africa can address these challenges while promoting environmental sustainability through waste reduction and decreased reliance on the harmful chemicals typically utilized in traditional textile manufacturing. The study will emphasize the importance of sustainable practices in the textile supply chain. Additionally, this study will examine the importance of collaborative initiatives among stakeholders, such as government entities, healthcare providers, and textile and apparel manufacturers, which promote an environment that fosters innovation in sustainable smart textiles and apparel technology. If South Africa taps into its local resources and skills, it could be a pioneer in the global South for creating eco-friendly healthcare solutions. This aligns with global sustainability trends and the sustainable development goals. The study will use a mixed-method approach, conducting surveys, focus group interviews, and case studies with healthcare professionals, patients, and textile and apparel manufacturers. The utilization of sustainable smart textiles not only enhances patient care through better monitoring but also supports a circular economy with biodegradable fibers and minimal textile waste. There is growing acknowledgment in the global healthcare sector of the benefits of smart textiles for personalized medicine, and South Africa has the chance to use this advancement to enhance its healthcare services while also addressing some persistent environmental challenges.

Keywords: smart apparel technologies, sustainable textiles, South African healthcare innovation, technology acceptance model

Procedia PDF Downloads 3
906 Failure of Agricultural Soil following the Passage of Tractors

Authors: Anis Eloud, Sayed Chehaibi

Abstract:

Compaction of agricultural soils as a result of the passage of heavy machinery on the fields is a problem that affects many agronomists and farmers, since it results in a loss of yield of most crops. To remedy this, and to meet the wider challenge of future food security, we must study and understand the process of soil degradation. The present review is devoted to understanding the effect of repeated passes on agricultural land. The experiments were performed on a plot at the ESIER site, characterized by a clay texture, in order to quantify the soil compaction caused by the wheels of the tractor during repeated passes over agricultural land. The test tractor was a CASE of 110 hp with a total mass of 5470 kg, of which 3500 kg rested on the two rear axles and 1970 kg on the front axle. The state of soil compaction was characterized by measuring its resistance to penetration by means of a penetrometer with direct manual reading, together with the density and permeability of the soil. Soil moisture was measured at the same time. The measurements were made in the initial state before the passage of the tractor and after each of 1 to 7 passes along the wheel track, with the rear wheels inflated to 1.5 bar and water-ballasted to valve level, and the front wheels inflated to 4 bar. The passes were spaced on average one week apart. The results show that the passage of wheels over tilled farm soil leads to compaction, which increases with the number of passes, especially in the horizons above 15 cm depth. The first pass has the greatest effect. However, the effect of subsequent passes does not follow a definite law, owing to the complex behavior of granular media and the history of tillage and the stresses the soil has undergone since its formation.

Keywords: wheel traffic, tractor, soil compaction, wheel

Procedia PDF Downloads 482
905 An Analysis of Legal and Ethical Implications of Sports Doping in India

Authors: Prathyusha Samvedam, Hiranmaya Nanda

Abstract:

Doping refers to the practice of using drugs or methods that enhance an athlete's performance. It is a problem that occurs on a worldwide scale and compromises the fairness of athletic tournaments. Rules have been created at both the national and international levels to prevent doping. However, these rules sometimes contradict one another, and they may not effectively prevent people from using performance-enhancing drugs (PEDs). This study will contend that India's inability to comply with specific Code criteria, as well as its failure to satisfy 'best practice' standards established by other countries, demonstrates a lack of uniformity in the implementation of anti-doping regulations and processes among nations. Such challenges have the potential to undermine the validity of the anti-doping system, particularly in developing nations like India. This article on the legislative framework governing doping in sports in India is therefore very important. To begin, doping in sports is a significant problem that affects the spirit of fair play and sportsmanship. Moreover, it has the potential to jeopardize the integrity of the sport itself. In addition, the research has the potential to educate policymakers, sports organizations, and other stakeholders about the current legal framework and how well it discourages doping in athletic competitions. This article is divided into four distinct sections. The first section offers an explanation of what doping is and provides some context about its development over time. This is followed by an examination of the role of anti-doping authorities and the responsibilities they perform. Case studies and the research technique employed for the study appear in the third section; finally, the results are presented in the last section. In conclusion, doping is a severe problem that endangers honest competition within sports.

Keywords: sports law, doping, NADA, WADA, performance enhancing drugs, anti-doping bill 2022

Procedia PDF Downloads 72
904 Review of the Safety of Discharge on the First Postoperative Day Following Carotid Surgery: A Retrospective Analysis

Authors: John Yahng, Hansraj Riteesh Bookun

Abstract:

Objective: This was a retrospective cross-sectional study evaluating the safety of discharge on the first postoperative day following carotid surgery - principally carotid endarterectomy. Methods: Between January 2010 and October 2017, 252 patients with a mean age of 72 years underwent carotid surgery by seven surgeons. Their medical records were consulted, and their operative and complication timelines were entered into a database. Descriptive statistics were used to analyse pooled responses and our indicator variables. The statistical package used was STATA 13. Results: There were 183 males (73%), and the comorbid burden was as follows: ischaemic heart disease (54%), diabetes (38%), hypertension (92%), stage 4 kidney impairment (5%) and current or ex-smoking (77%). The main indications were transient ischaemic attacks (42%), stroke (31%), asymptomatic carotid disease (16%) and amaurosis fugax (8%). 247 carotid endarterectomies (109 with patch arterioplasty, 88 with eversion and transection technique, 50 with endarterectomy only) were performed. 2 carotid bypasses, 1 embolectomy, 1 thrombectomy with patch arterioplasty and 1 excision of a carotid body tumour were also performed. 92% of the cases were performed under general anaesthesia. A shunt was used in 29% of cases. The mean length of stay was 5.1 ± 3.7 days, with a range of 2 to 22 days. No patient was discharged on day 1. The mean time from admission to surgery was 1.4 ± 2.8 days, ranging from 0 to 19 days. The mean time from surgery to discharge was 2.7 ± 2.0 days, with a range of 0 to 14 days. 36 complications were encountered over this period: 12 failed repairs (5 major strokes, 2 minor strokes, 3 transient ischaemic attacks, 1 cerebral bleed, 1 occluded graft), 11 bleeding episodes requiring a return to the operating theatre, 5 adverse cardiac events, 3 cranial nerve injuries, 2 respiratory complications, 2 wound complications and 1 acute kidney injury. There were no deaths. 17 complications occurred on postoperative day 0, 11 on postoperative day 1, 6 on postoperative day 2 and 2 on postoperative day 3. 78% of all complications happened before the second postoperative day. Of the complications which occurred on the second or third postoperative day, 4 (1.6%) were bleeding episodes, 1 (0.4%) was a failed repair, 1 (0.4%) was a respiratory complication and 1 (0.4%) was a wound complication. Conclusion: Although it has been common practice to discharge patients on the second postoperative day following carotid endarterectomy, we find here that discharge on the first postoperative day is safe. The overall complication rate is low, and most complications are captured before the second postoperative day. We suggest that patients having an uneventful first 24 hours post surgery be discharged on the first day. This should reduce hospital length of stay and the health economic burden.
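
The timing figure quoted in the results can be checked with a couple of lines of arithmetic; the short snippet below simply reproduces the 78% figure from the per-day complication counts given above.

```python
# Verification of the proportion of complications captured by the first postoperative day.
complications_by_day = {0: 17, 1: 11, 2: 6, 3: 2}
total = sum(complications_by_day.values())                        # 36 complications in all
before_day_2 = complications_by_day[0] + complications_by_day[1]  # occurred on day 0 or day 1
print(f"{before_day_2}/{total} = {before_day_2 / total:.0%} before the second postoperative day")
```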

Keywords: carotid, complication, discharge, surgery

Procedia PDF Downloads 166
903 Making Good Samaritans: An Exploration of Criminal Liability for Failure to Rescue in England and Wales

Authors: Usmaan Siddiqui

Abstract:

In England and Wales, there is no duty to rescue strangers. We will be investigating whether this position is correct and whether we should introduce a Good Samaritan law. In order to explore this, firstly, we will be exploring the nature of our moral duties. How far do our moral duties extend? Do they extend only to our family and friends, or do they also extend to strangers? Secondly, even if there does exist a moral duty, should this duty be enforced by criminal law? To what extent should the criminal law reflect morality? Under English criminal law, the consensus is that it is not the job of the criminal law to perfect human behaviour, and whilst the law should prevent us from causing harm, it should not force us to be good. This approach is radically different from that of many other European countries that do have a Good Samaritan law. If there are compelling in-principle reasons to introduce a Good Samaritan law, how would we deal with the pragmatic institutional constraints? Such a law has been said to be unworkable in practice and difficult to delimit. In order to verify this, we shall carry out a comparative analysis between England and selected states in the US to gauge how successful the Good Samaritan law has been in dealing with these institutional constraints. In terms of methodology, as well as a comparative analysis, we shall also be carrying out a doctrinal analysis exploring the English criminal law's position regarding omissions. In conclusion, the findings so far are that, whilst it is not the job of the law to perfect human behaviour, both respect for the law and the level of social co-operation will be greatly improved if the law encourages morally desirable conduct. Whilst it is possible for a society to exist without a duty to assist the distressed, a society which ignores the vulnerable is cold, callous, and uncaring. After all, we all need to face up to the possibility that we may one day be vulnerable and in need of urgent aid, and it is about time English criminal law caught up with the majority of Europe and protected the vulnerable.

Keywords: criminal, law, omissions, philosophy

Procedia PDF Downloads 230
902 Electronic Device Robustness against Electrostatic Discharges

Authors: Clara Oliver, Oibar Martinez

Abstract:

This paper is intended to reveal the severity of electrostatic discharge (ESD) effects in electronic and optoelectronic devices by performing sensitivity tests based on the Human Body Model (HBM) standard. We explain the HBM standard in detail, together with the typical failure modes associated with electrostatic discharges. In addition, a prototype electrostatic charge generator, which features a compact high voltage source, has been designed, fabricated, and verified to stress electronic devices. This prototype is inexpensive and enables one to run a battery of pre-compliance tests aimed at detecting unexpected weaknesses to static discharges at the component level. Some tests with different devices were performed to illustrate the behavior of the proposed generator. A set of discharges was applied according to the HBM standard to commercially available bipolar transistors, complementary metal-oxide-semiconductor transistors and light emitting diodes. It is observed that high current and voltage ratings in electronic devices do not necessarily guarantee that the device will withstand high levels of electrostatic discharge. We have also compared the results obtained by performing the sensitivity tests based on the HBM with a real discharge generated by a human. For this purpose, the charge accumulated in the person is monitored, and a direct discharge against the devices is generated by touching them. Every test has been performed under controlled relative humidity conditions. It is believed that this paper can be of interest for research teams involved in the development of electronic and optoelectronic devices who need to verify the reliability of their devices in terms of robustness to electrostatic discharges.
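
For orientation, the HBM standard idealises a charged person as a 100 pF capacitor discharging through a 1.5 kΩ resistor, giving an exponential current i(t) = (V/R)·exp(-t/RC) into a shorted device. The sketch below evaluates that waveform for an assumed 2 kV pre-charge; the voltage level is an illustrative test class, not a value taken from the paper.

```python
# A minimal sketch of the idealised HBM discharge current (standard network: 100 pF, 1.5 kOhm).
import numpy as np

C = 100e-12          # F, human-body capacitance in the HBM network
R = 1.5e3            # ohm, series discharge resistance
V = 2000.0           # V, hypothetical pre-charge level

t = np.linspace(0, 1e-6, 1000)              # 1 microsecond window
i = (V / R) * np.exp(-t / (R * C))          # exponential discharge into a short

print(f"peak current = {i[0]:.2f} A, time constant = {R * C * 1e9:.0f} ns")
```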

Keywords: human body model, electrostatic discharge, sensitivity tests, static charge monitoring

Procedia PDF Downloads 149
901 Flexural Behavior of Eco-Friendly Prefabricated Low Cost Bamboo Reinforced Wall Panels

Authors: Vishal Puri, Pradipta Chakrabortty, Swapan Majumdar

Abstract:

Precast concrete construction is the most commonly used technique for rapid construction and is used very frequently in developed countries. The guidelines required to utilize the potential of prefabricated construction are still not available in developing countries. This causes over-dependence on in-situ construction procedures, which further affects the quality, scheduling, and duration of construction. Also, with the ever-increasing costs of building materials and their negative impact on the environment, it has become imperative to look for alternative construction materials which are cheap and sustainable. Bamboo and fly ash are alternative construction materials with great potential in the construction industry. There is thus a great need to develop prefabricated components that utilize the potential of these materials. Bamboo reinforced beams, bamboo reinforced columns and bamboo arches, as researched previously, have shown great prospects for the prefabricated construction industry. However, many other prefabricated components still need to be studied and widely tested before their utilization in the prefabricated construction industry. In the present study, the authors showcase a prefabricated bamboo reinforced wall panel for the prefabricated construction industry. The paper presents a detailed methodology for the development of such prefabricated panels. It also presents the flexural behavior of such panels tested under flexural loads following ASTM guidelines. It was observed that these wall panels are much more flexible and do not show the brittle failure observed in traditional brick walls. It was observed that the prefabricated walls are about 42% cheaper than conventional brick walls. It was also observed that the prefabricated walls are considerably lighter in weight and are environmentally friendly. It was thus concluded that this type of wall panel is an excellent alternative to partition brick walls.

Keywords: bamboo, prefabricated walls, reinforced structure, sustainable infrastructure

Procedia PDF Downloads 311
900 The Extension of the Kano Model by the Concept of Over-Service

Authors: Lou-Hon Sun, Yu-Ming Chiu, Chen-Wei Tao, Chia-Yun Tsai

Abstract:

It is common practice for many companies to ask employees to provide heart-touching service for customers and to emphasize the attitude of 'customer first'. However, such services may not necessarily gain praise and may actually be considered excessive if customers do not appreciate such behaviors. In reality, many restaurant businesses try to provide as much service as possible without taking into account whether over-provision may lead to negative customer reception. A survey of 894 people in Britain revealed that 49 percent of respondents consider over-attentive waiters the most annoying aspect of dining out. It can be seen that merely aiming to exceed customers' expectations without actually addressing their needs only further distances and dissociates the standard of service from the goal of customer satisfaction itself. Over-service is defined as 'service provided that exceeds customer expectations, or simply that customers deem redundant, resulting in negative perception'. It was found that customers' reactions and complaints concerning over-service are not as intense as those against service failures caused by the inability to meet expectations; consequently, it is more difficult for managers to become aware of the existence of over-service. Thus the ability to manage over-service behaviors is a significant topic for consideration. The Kano model classifies customer preferences into five categories: attractive quality attributes, one-dimensional quality attributes, must-be quality attributes, indifferent quality attributes and reverse quality attributes. The model is still very popular among researchers exploring quality aspects and customer satisfaction. Nevertheless, several studies have indicated that Kano's model does not fully capture the nature of service quality, and the concept of over-service can be used to restructure the model and provide a better understanding of the service quality construct. In this research, the structure of Kano's two-dimensional questionnaire will be used to classify the factors into different dimensions. The same questions will be used in a second questionnaire to identify the over-service experiences of the respondents. The findings from these two questionnaires will be used to analyze the relevance between the service quality classification and over-service behaviors. The subjects of this research are customers of fine dining chain restaurants. Three hundred questionnaires will be issued based on the stratified random sampling method. Items for measurement will be derived from the DINESERV scale. The tangible dimension of the questionnaire will be eliminated because this research focuses on employee behaviors. Quality attributes of the Kano model are often regarded as an instrument for improving customer satisfaction. The extension of the Kano model will not only develop a better understanding of customer needs and expectations but also enhance the management of service quality.
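
Since the study relies on Kano's two-dimensional questionnaire, a short sketch of how paired answers are mapped to quality categories may be helpful. The mapping below follows the standard Kano evaluation table (A = attractive, O = one-dimensional, M = must-be, I = indifferent, R = reverse, Q = questionable); the sample respondent answer is invented.

```python
# A minimal sketch of Kano-category classification from a two-dimensional questionnaire.
# Rows: answer to the functional question; columns: answer to the dysfunctional question.
KANO_TABLE = {
    "like":      {"like": "Q", "must-be": "A", "neutral": "A", "live-with": "A", "dislike": "O"},
    "must-be":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "neutral":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "live-with": {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "dislike":   {"like": "R", "must-be": "R", "neutral": "R", "live-with": "R", "dislike": "Q"},
}

def classify(functional_answer: str, dysfunctional_answer: str) -> str:
    """Return the Kano category for one respondent's pair of answers."""
    return KANO_TABLE[functional_answer][dysfunctional_answer]

# Example: a respondent likes attentive table service when present and can
# live with its absence -> attractive quality attribute.
print(classify("like", "live-with"))   # 'A'
```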

Keywords: consumer satisfaction, DINESERV, kano model, over-service

Procedia PDF Downloads 161
899 Anthropometric Indices of Obesity and Coronary Artery Atherosclerosis: An Autopsy Study in a South Indian Population

Authors: Francis Nanda Prakash Monteiro, Shyna Quadras, Tanush Shetty

Abstract:

The association between human physique and the morbidity and mortality resulting from coronary artery disease has been studied extensively over several decades. Multiple studies have also examined the correlation between the grade of atherosclerosis, coronary artery disease and anthropometric measurements; however, autopsy-based studies among them remain few. It has been suggested that, while in living subjects it would be expensive, difficult, and even harmful to use imaging modalities like CT scans and procedures involving contrast media to study mild atherosclerosis, no such harm is encountered in the study of autopsy cases. This autopsy-based study aimed to correlate anthropometric measurements and indices of obesity, such as waist circumference (WC), hip circumference (HC), body mass index (BMI) and waist-hip ratio (WHR), with the degree of atherosclerosis in the right coronary artery (RCA), the main branch of the left coronary artery (LCA) and the left anterior descending artery (LADA) in 95 victims of South Indian origin of both genders aged between 18 and 75 years. The grading of atherosclerosis was done according to the criteria suggested by the American Heart Association. The study also analysed the correlation of the anthropometric measurements and indices of obesity with the number of coronaries affected by atherosclerosis in an individual. All the anthropometric measurements and the derived indices were found to be significantly correlated with each other in both genders, except for age, which was found to have a significant correlation only with the WHR. In both genders, a severe degree of atherosclerosis was most commonly observed in the LADA, followed by the LCA and RCA. The grade of atherosclerosis in the RCA is significantly related to the WHR in males. The grade of atherosclerosis in the LCA and LADA is significantly related to the WHR in females. A significant relation was observed between the grade of atherosclerosis in the RCA and WC and WHR, and between the grade of atherosclerosis in the LADA and HC, in males. A significant relation was observed between the grade of atherosclerosis in the RCA and WC and WHR, and between the grade of atherosclerosis in the LADA and HC, in females. Anthropometric measurements/indices of obesity can be an effective means to identify high-risk cases of atherosclerosis at an early stage, which can be effective in reducing the associated cardiac morbidity and mortality. A person with anthropometric measurements suggestive of mild atherosclerosis can be advised to modify his or her lifestyle, along with decreasing exposure to other risk factors. Those with measurements suggestive of a higher degree of atherosclerosis can be subjected to confirmatory procedures to start effective treatment.
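
The indices named above are computed from standard formulas (BMI = weight / height², WHR = waist circumference / hip circumference); the brief sketch below applies them to hypothetical measurements.

```python
# A small sketch of the anthropometric indices used in the study; measurements are hypothetical.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def waist_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    """Waist-hip ratio (dimensionless)."""
    return waist_cm / hip_cm

print(round(bmi(72.0, 1.68), 1))               # 25.5 kg/m^2
print(round(waist_hip_ratio(94.0, 100.0), 2))  # 0.94
```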

Keywords: atherosclerosis, coronary artery disease, indices, obesity

Procedia PDF Downloads 66
898 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques

Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang

Abstract:

Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy to overcome this problem is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast spoiled gradient-echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of FIESTA images is related to T1/T2 and is different from that of SSFSE. The advantages and disadvantages of these scanning sequences for fetal imaging have not yet been clearly demonstrated. This study aimed to compare these three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations. The image qualities and influencing factors among these three techniques were explored. A 1.5T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used in this study. Twenty-five pregnant women were recruited to undergo fetal MRI examination with SSFSE, FIESTA and FSPGR scanning. Multi-oriented and multi-slice images were acquired. Afterwards, the MR images were interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA can provide good image quality among these three rapid imaging techniques. Vessel signals on FIESTA images are higher than those on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but it is prone to cause banding artifacts. FSPGR-T1WI renders a lower Signal-to-Noise Ratio (SNR) because it severely suffers from the impact of maternal and fetal movements. The scan times for these three scanning sequences were 25 sec (T2W-SSFSE), 20 sec (FIESTA) and 18 sec (FSPGR). In conclusion, all three of these rapid MR scanning sequences can produce high contrast and high spatial resolution images. The scan time can be shortened by incorporating parallel imaging techniques so that the motion artifacts caused by fetal movements can be reduced. A good understanding of the characteristics of these three rapid MRI techniques is helpful for technologists to obtain reproducible fetal anatomy images of high quality for prenatal diagnosis.

Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE

Procedia PDF Downloads 530
897 New Evaluation of the Richness of Cactus (Opuntia) in Active Biomolecules and their Use in Agri-Food, Cosmetic, and Pharmaceutical

Authors: Lazhar Zourgui

Abstract:

Opuntia species are used as local medicinal interventions for chronic diseases and as food sources, mainly because they possess nutritional properties and biological activities. Opuntia ficus-indica (L.) Mill, commonly known as prickly pear or nopal cactus, is the most economically valuable plant of the Cactaceae family worldwide. It is a plant native to tropical and subtropical America, which can grow in arid and semi-arid climates. It belongs to the Cactaceae, a family of dicotyledonous angiosperms of which about 1500 species of cacti are known. The Opuntia plant is distributed throughout the world and has great economic potential. There are differences in the phytochemical composition of Opuntia species between wild and domesticated species and within the same species. It is an interesting source of plant bioactive compounds. Bioactive compounds are compounds with nutritional benefits and are generally classified into phenolic and non-phenolic compounds and pigments. Opuntia species are able to grow in almost all climates, for example, arid, temperate, and tropical climates, and their bioactive compound profiles change depending on the species, cultivar, and climatic conditions. Therefore, there is an opportunity for the discovery of new compounds from different Opuntia cultivars. The health benefits of prickly pear are widely demonstrated: there is ample evidence of the health benefits of consuming prickly pear due to its nutrients and vitamins and its antioxidant properties arising from its content of bioactive compounds. In addition, prickly pear is used in the treatment of hyperglycemia and high cholesterol levels, and its consumption is linked to a lower incidence of coronary heart disease and certain types of cancer. It may be effective in insulin-independent type 2 diabetes mellitus. Opuntia ficus-indica seed oil has shown potent antioxidant and prophylactic effects. Industrial applications of these bioactive compounds are increasing. In addition to their application in the pharmaceutical industries, bioactive compounds are used in the food industry for the production of nutraceuticals and new food formulations (juices, drinks, jams, sweeteners). In my lecture, I will comprehensively review the phytochemical, nutritional, and bioactive compound composition of the different aerial and underground parts of Opuntia species. The biological activities and applications of Opuntia compounds are also discussed.

Keywords: medicinal plants, cactus, Opuntia, actives biomolecules, biological activities

Procedia PDF Downloads 106
896 Impact of Diabetes Mellitus Type 2 on Clinical In-Stent Restenosis in First Elective Percutaneous Coronary Intervention Patients

Authors: Leonard Simoni, Ilir Alimehmeti, Ervina Shirka, Endri Hasimi, Ndricim Kallashi, Verona Beka, Suerta Kabili, Artan Goda

Abstract:

Background: Diabetes Mellitus type 2, small vessel calibre, stented vessel length, complex lesion morphology, and prior bypass surgery have been shown to be risk factors for In-Stent Restenosis (ISR). However, there are some contradictory results about body mass index (BMI) as a risk factor for ISR. Purpose: We want to identify clinical, lesional and procedural factors that can predict clinical ISR in our patients. Methods: We enrolled 759 patients who underwent first-time elective PCI with Bare Metal Stents (BMS) from September 2011 to December 2013 in our Department of Cardiology and followed them for at least 1.5 years, with a median of 862 days (2 years and 4 months). Only the patients re-admitted with ischemic heart disease underwent control coronary angiography; no routine angiographic control was performed. Patients were categorized into ISR and non-ISR groups and compared between them. Multivariate analysis (binary logistic regression, forward conditional method) was used to identify independent predictive risk factors. P < 0.05 was considered statistically significant. Results: ISR compared to non-ISR individuals had a significantly lower BMI (25.7±3.3 vs. 26.9±3.7, p=0.004), higher risk anatomy (LM + 3-vessel CAD) (23% vs. 14%, p=0.03), a higher number of stents per person (2.1±1.1 vs. 1.75±0.96, p=0.004), a greater length of stents per person (39.3±21.6 vs. 33.3±18.5, p=0.01), and a lower use of clopidogrel and ASA together (95% vs. 99%, p=0.012). They also had a higher, although not statistically significant, prevalence of Diabetes Mellitus (42% vs. 32%, p=0.072) and a greater number of treated vessels (1.36±0.5 vs. 1.26±0.5, p=0.08). In the multivariate analysis, Diabetes Mellitus type 2 and the use of multiple stents were independent predictive risk factors for In-Stent Restenosis, OR 1.66 [1.03-2.68], p=0.039, and OR 1.44 [1.16-1.78], p=0.001, respectively. On the other hand, higher BMI and the use of clopidogrel and ASA together were protective factors, OR 0.88 [0.81-0.95], p=0.001 and OR 0.2 [0.06-0.72], p=0.013, respectively. Conclusion: Diabetes Mellitus and multiple stents are strong predictive risk factors, whereas the use of clopidogrel and ASA together is a protective factor for clinical In-Stent Restenosis. Paradoxically, high BMI is a protective factor for In-Stent Restenosis, probably related to a larger diameter of vessels and consequently a larger diameter of stents implanted in these patients. Further studies are needed to clarify this finding.
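
For readers less familiar with the analysis named in the methods, the sketch below fits a binary logistic regression to a synthetic dataset and reports odds ratios. The variable names and the generated data are illustrative assumptions; the study itself applied a forward conditional selection to real clinical data, which this sketch does not reproduce.

```python
# A minimal sketch of binary logistic regression with odds ratios (synthetic data only).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 759
df = pd.DataFrame({
    "diabetes": rng.integers(0, 2, n),        # type 2 diabetes (0/1)
    "n_stents": rng.integers(1, 5, n),        # stents per patient
    "bmi":      rng.normal(26.8, 3.6, n),     # body mass index
    "dual_apt": rng.integers(0, 2, n),        # clopidogrel + ASA (0/1)
})
# Generate a synthetic outcome so the example is self-contained.
logit_p = -2 + 0.5 * df["diabetes"] + 0.35 * df["n_stents"] - 0.12 * (df["bmi"] - 27) - 1.5 * df["dual_apt"]
df["isr"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = sm.Logit(df["isr"], sm.add_constant(df[["diabetes", "n_stents", "bmi", "dual_apt"]])).fit(disp=False)
odds_ratios = np.exp(model.params)            # OR > 1: risk factor, OR < 1: protective factor
print(odds_ratios.round(2))
```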

Keywords: body mass index, diabetes mellitus, in-stent restenosis, percutaneous coronary intervention

Procedia PDF Downloads 210
895 Overview Studies of High Strength Self-Consolidating Concrete

Authors: Raya Harkouss, Bilal Hamad

Abstract:

Self-Consolidating Concrete (SCC) is a relatively new technology developed as an effective solution to problems associated with poor consolidation quality. An SCC mix is considered successful if it flows freely and cohesively without mechanical compaction. The construction industry shows a strong tendency to use SCC in many contemporary projects in order to benefit from the various advantages this technology offers. At this point, a key question arises regarding the effect of the enhanced fluidity of SCC on the structural behavior of high-strength self-consolidating reinforced concrete. A three-phase research program was conducted at the American University of Beirut (AUB) to address this concern. The first two phases consisted of comparative studies on concrete and mortar mixes prepared with a second-generation sulphonated naphthalene-based superplasticizer (SNF) or a third-generation polycarboxylate ether-based superplasticizer (PCE). The third phase of the research program investigated and compared the structural performance of high-strength reinforced concrete beam specimens prepared with the two generations of superplasticizers, which constituted the only variable between the concrete mixes. The beams were designed to exhibit flexure, shear, or bond-splitting failure. The outcomes of the experimental work revealed comparable resistance of beam specimens cast using self-consolidating concrete and conventional vibrated concrete (VC). The differences in the experimental values between the SCC and the control VC beams were minimal, leading to the conclusion that the high consistency of SCC has little effect on the flexural, shear, and bond strengths of concrete members.

Keywords: self-consolidating concrete (SCC), high-strength concrete, concrete admixtures, mechanical properties of hardened SCC, structural behavior of reinforced concrete beams

Procedia PDF Downloads 255
894 Impacts of Computer Assisted Instruction and Gender on High-Flyers Pre-Service Teachers' Attitude towards Agricultural Economics in Southwest Nigeria

Authors: Alice Morenike Olagunju, Olufemi A. Fakolade, Abiodun Ezekiel Adesina, Olufemi Akinloye Bolaji, Oriyomi Rabiu

Abstract:

The use of computer-assisted instruction (CAI) has been suggested as a way out of the persistent problem of high failure rates in, and negative attitudes towards, Agricultural Economics (AE) in Colleges of Education (CoE) in Southwest Nigeria. The impacts of CAI on high-flyers are yet to be ascertained. This study, therefore, determined the impacts of CAI on high-flyers pre-service teachers' attitude towards AE concepts in Southwest Nigeria. The study adopted a pretest-posttest, control-group, quasi-experimental design. Six CoE with e-library facilities were purposively selected. Forty-nine 200-level Agricultural Education students offering the Introduction to AE course across the six CoE were the participants. The participants were assigned to two groups (CAI, 22; control, 27). Treatment lasted eight weeks. The AE Attitude Scale (r=0.80), instructional guides, and Teacher Performance Assessment Sheets were used for data collection. Data were analysed using the t-test. The participants were 62.8% male with a mean age of 22 years. Treatment had a significant effect on high-flyers pre-service teachers' attitude (t = 17.44; df = 47, p < .05). Participants in the CAI group (mean = 71.03) had a higher post-treatment attitude mean score than those in the control group (mean = 64.92). Gender had no significant effect on attitude (t = 3.06; df = 47, p > .05). The computer-assisted instructional mode enhanced students' attitude towards Agricultural Economics concepts. Therefore, CAI should be adopted to improve attitudes towards Agricultural Economics concepts among high-flyers pre-service teachers.
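As a minimal illustration of the analysis described above, the sketch below runs an independent-samples t-test comparing simulated post-treatment attitude scores for a CAI group (n=22) and a control group (n=27). The group means mirror those reported in the abstract, but the individual scores and their spread are simulated assumptions, not the study data.

# Illustrative sketch only: independent-samples t-test on simulated attitude scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cai_scores = rng.normal(loc=71.03, scale=8.0, size=22)      # assumed spread
control_scores = rng.normal(loc=64.92, scale=8.0, size=27)  # assumed spread

# Pooled-variance (Student) t-test, the conventional choice for this design.
t_stat, p_value = stats.ttest_ind(cai_scores, control_scores)
df_total = len(cai_scores) + len(control_scores) - 2
print(f"t = {t_stat:.2f}, df = {df_total}, p = {p_value:.4f}")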

Keywords: attitude towards agricultural economics concepts, colleges of education in southwest Nigeria, computer-assisted instruction, high-flyers pre-service teachers

Procedia PDF Downloads 249
893 Two-Sided Information Dissemination in Takeovers: Disclosure and Media

Authors: Eda Orhun

Abstract:

Purpose: This paper analyzes a target firm’s decision to voluntarily disclose information during a takeover event and the effect of such disclosures on the outcome of the takeover. Voluntary disclosures, especially earnings forecasts issued around takeover events, may affect shareholders’ assessment of the target firm’s value and, in turn, the takeover result. This study aims to shed light on this question. Design/methodology/approach: The paper examines, both theoretically and empirically, the role that voluntary disclosures by target firms during a takeover event play in the likelihood of takeover success. A game-theoretical model is set up to analyze the target firm’s decision to voluntarily disclose information in order to inform shareholders about its real worth. The empirical implication of the model is tested with binary outcome models, where the disclosure variable is obtained by identifying target firms in the sample that provide positive news by issuing increasing management earnings forecasts. Findings: The model predicts that a voluntary disclosure of positive information by the target decreases the likelihood that the takeover succeeds. The empirical analysis confirms this prediction by showing that positive earnings forecasts by target firms during takeover events increase the probability of takeover failure. Overall, information dissemination through voluntary disclosures by target firms is shown to be an important factor affecting takeover outcomes. Originality/Value: To the author’s knowledge, this is the first study to examine the impact of voluntary disclosures by the target firm during a takeover event on the likelihood of takeover success. The results contribute to the information economics, corporate finance, and M&A literatures.
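To make the empirical strategy concrete, the following Python sketch estimates a binary outcome (logit) model of takeover success on a positive-disclosure dummy plus illustrative controls. The variable names, the controls, and the simulated data are assumptions of this sketch and are not drawn from the paper's sample.

# Hypothetical sketch of a binary outcome model of takeover success.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500  # illustrative sample size
df = pd.DataFrame({
    "positive_forecast": rng.integers(0, 2, n),  # 1 if the target issued an increasing earnings forecast
    "premium": rng.normal(0.3, 0.1, n),          # assumed control: offer premium
    "target_size": rng.normal(6.0, 1.5, n),      # assumed control: log market capitalisation
})
# Simulated outcome in which positive disclosure lowers the success probability,
# consistent with the prediction described above.
latent = 0.5 - 1.0 * df["positive_forecast"] + 2.0 * df["premium"] - 0.1 * df["target_size"]
df["success"] = rng.binomial(1, 1 / (1 + np.exp(-latent)))

exog = sm.add_constant(df[["positive_forecast", "premium", "target_size"]])
result = sm.Logit(df["success"], exog).fit(disp=0)
print(result.summary())  # a negative coefficient on positive_forecast mirrors the stated finding

On real data, the disclosure dummy would be constructed from management earnings forecasts issued during the takeover window, as described above, and a probit specification could be substituted for the logit without changing the logic.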

Keywords: takeovers, target firm, voluntary disclosures, earnings forecasts, takeover success

Procedia PDF Downloads 318
892 “Referral for re-submission” – The Case of EFL Applied Linguistics Doctoral Defense Sessions

Authors: Alireza Jalilifar, Nadia Mayahi

Abstract:

An oral defense is the examination stage of a doctoral program in which candidates display their academic capacity by sharing and disseminating the findings of their study and defending their position. In this challenging, criticism-generating context, the examiners evaluate the PhD dissertation critically in order to confirm its scholarly merit or lack of it. To identify the examiners’ expectations of the viva, this study used a conversation analytic approach for analyzing the data. The research is inductive in that it seeks to develop theory grounded in the data. The data comprised transcripts of the question-and-answer sections of two applied linguistics doctoral defense sessions held in 2019 at two accredited Iranian state universities, both of which are among the top Iranian universities in the Times Higher Education World University Rankings. Although the examiners raised similar shortcomings and deficiencies in both sessions, for instance in terms of innovation, development, sampling, and treatment, one of these defenses passed with distinction while the other was referred for re-submission. It seems that the outcome of a viva in an EFL context not only depends on adherence to the rules and regulations of doctoral research but is also influenced, to a certain extent, by the strictness of the examiners and by the candidates’ language proficiency and effective negotiation and communication skills in this confrontational communicative event. The findings of this study provide evidence for the issues that determine the success or failure of PhD candidates in displaying their claims of scholarship during their defense sessions. The study has implications both for applied linguistics doctoral students and for academics in EFL contexts who seek to establish and authenticate the doctorateness of a dissertation.

Keywords: academic discourse, conversation analysis, doctoral defense, doctorateness, EFL

Procedia PDF Downloads 156
891 From Division to Diversity: A Post-Partition Study Exploring Identity and Culture in the Selected Works of Amitav Ghosh and Bapsi Sidhwa

Authors: Akanksha Dogra, Abhilasha Singh

Abstract:

This paper revolves around the cultural complexities, cultural similarities, and national sentiments of the contemporary period. It deals with the idea of cultural hybridization and the failure of socio-psychological and cultural boundaries to include all members of society. Writers like Amitav Ghosh and Bapsi Sidhwa have left a significant mark on debates about cultural imperialism and diversity, which have led to fluid identities in present-day society. The paper engages with the idea that partition could have been a solution to the question of social and religious homogeneity. Because Ghosh and Sidhwa focus on historical fiction, they do not dwell on border politics alone but rather depict complex cultural entanglements. In terms of identity, they hold that it is constructed and fragmented, further shaped by colonialism and displacement, and they reflect on culture in relation to the disruptions and transformations experienced by communities. Further, the division not only led to the creation of national boundaries but also forced individuals to form identities based on religion. The paper aims at analyzing the contemporary scenario and comprehending the multiplicity of cross-cultural interactions that lead to such convolutions. It fathoms the cultural and political complexities arising from the nation and nation-building as part of collective consciousness. The paper limits itself to Amitav Ghosh’s The Shadow Lines and The Hungry Tide and Bapsi Sidhwa’s Pakistani Bride and The Crow Eaters, read through Homi K. Bhabha’s The Location of Culture and Edward Said’s Culture and Imperialism.

Keywords: culture, imperialism, cultural hybridization, nation-building, hybridity

Procedia PDF Downloads 6