Search results for: time series representation
1909 Effect of Manure Treatment on Furrow Erosion: A Case Study of Sagawika Irrigation Scheme in Kasungu, Malawi
Authors: Abel Mahowe
Abstract:
Furrow erosion is a major threat to the sustainability of irrigation in Malawi and pollutes water bodies, killing many aquatic animals. Many rivers in Malawi are drying up because of poor practices around these water bodies; furrow erosion is one cause of sedimentation in these rivers, and although its effect is gradual and therefore often neglected, it has disastrous long-term consequences for water bodies and the aquatic animals that live in them. An assessment of the effect of manure treatment on furrow erosion was carried out at the Sagawika irrigation scheme in Kasungu District, in the northern part of Malawi. The soil on the field was a freshly tilled clay loam. The field had an average furrow slope of 0.2% and was divided into two blocks, A and B. Each block had 20 V-shaped furrows, each 10 m long, constructed at a spacing of 0.6 m. Three types of manure were used to construct the furrows by mixing them into moderately moist soil: goat manure, pig manure, and manure from crop residues. In each block, five furrows were made with each manure type, and one set of five furrows in each block was made without manure treatment as a control. The manure application rate was 5 kg/m. Tomato was planted in both blocks at a spacing of 0.15 m between rows and 0.15 m between planting stations. Irrigation water was led from a feeder canal into the furrows using siphons, with the discharge into each furrow set at 1.86 L/s. The ¾ rule was used to determine the cut-off time for the irrigation cycles in order to reduce run-off at the tail end. During each irrigation cycle, samples of the runoff water were collected at one-minute intervals and analyzed for total sediment concentration, which was used to estimate the total soil sediment loss. The results show that a significant amount of soil is lost from soils low in organic matter, whereas erosion was low in furrows constructed with manure treatment within the blocks. The results also show that manures differ in their ability to control erosion: pig manure bound the soil more effectively than the other manures, as sediment amounts at the tail end of furrows constructed with it were reduced. These findings indicate that the organic matter in manure helps soil particles bind together and resist the erosive force of water. Using manure when constructing furrows in soils with little organic matter can greatly reduce erosion, thereby reducing the pollution of water bodies and improving conditions for aquatic animals.
Keywords: aquatic, erosion, furrow, soil
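A minimal sketch of the sediment-loss estimate described above, assuming a constant runoff rate equal to the siphon discharge of 1.86 L/s and runoff samples taken at one-minute intervals; the concentration values are invented for illustration, not measured data from the study.

```python
# Sketch: estimating total soil sediment loss per furrow from runoff samples.
# Assumptions (not from the study data): constant runoff rate equal to the
# siphon discharge, and illustrative sediment concentrations in g/L.

runoff_rate_l_per_s = 1.86          # siphon discharge into each furrow (L/s)
sample_interval_s = 60              # runoff sampled at one-minute intervals
concentrations_g_per_l = [4.2, 3.8, 3.5, 3.1, 2.9, 2.6]  # hypothetical samples

# Total loss = sum over intervals of (concentration * runoff volume in that interval).
sediment_loss_g = sum(
    c * runoff_rate_l_per_s * sample_interval_s for c in concentrations_g_per_l
)
print(f"Estimated sediment loss: {sediment_loss_g / 1000:.2f} kg per furrow")
```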
Procedia PDF Downloads 287
1908 A Novel Harmonic Compensation Algorithm for High Speed Drives
Authors: Lakdar Sadi-Haddad
Abstract:
The study of very high-speed electrical drives has seen a resurgence of interest in the past few years, as an inventory of the scientific papers and patents on the subject confirms. The democratization of magnetic bearing technology is at the origin of recent developments in high-speed applications. The main advantage of these machines is a much higher power density than the state of the art. Nevertheless, particular attention must be paid to the design of the inverter as well as to control and command. The surface-mounted permanent magnet synchronous machine is the most appropriate technology for high-speed applications. However, it has the drawback of requiring a carbon sleeve to retain the magnets, which could otherwise tear under the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties, but its poor thermal conductivity results in very poor evacuation of the eddy current losses induced in the magnets by time and space stator harmonics. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high-speed applications such harmonics are particularly harmful because, on the one hand, the characteristic impedance is very low and, on the other, the ratio between the switching frequency and the fundamental frequency is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while a second is to introduce a sine filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in very high-speed machines the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft mode frequency, subharmonics, etc. Some studies address these issues but treat the phenomena with separate solutions (a specific modulation strategy, active damping methods, etc.). The purpose of this paper is to present a complete new active harmonic compensation algorithm, based on an improvement of standard vector control, as a global solution to all of these issues. The presentation is based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. A state of the art of available solutions is then provided before developing the content of the new active harmonic compensation algorithm. The study is completed by a validation using simulations and a practical case on a high-speed machine.
Keywords: active harmonic compensation, eddy current losses, high speed machine
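The abstract does not disclose the algorithm itself. Purely as an illustration of the general idea behind selective harmonic compensation within vector control, the sketch below uses a textbook synchronous-reference-frame extraction of a 5th-harmonic current (not the authors' algorithm); all signal values are invented.

```python
import numpy as np

def park(i_a, i_b, i_c, theta):
    """Amplitude-invariant Clarke + Park transform onto a frame at angle theta."""
    i_alpha = (2.0 / 3.0) * (i_a - 0.5 * i_b - 0.5 * i_c)
    i_beta = (i_b - i_c) / np.sqrt(3.0)
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

# Illustrative stator currents: fundamental plus a 5th harmonic (negative sequence).
# All values are invented; this is not the machine or inverter of the study.
f1, fs, h = 1000.0, 200_000.0, 5
t = np.arange(0.0, 0.02, 1.0 / fs)
w1t = 2.0 * np.pi * f1 * t
i_a = np.cos(w1t) + 0.1 * np.cos(5 * w1t)
i_b = np.cos(w1t - 2 * np.pi / 3) + 0.1 * np.cos(5 * (w1t - 2 * np.pi / 3))
i_c = -(i_a + i_b)

# In a frame rotating at -5*w1 (the 5th harmonic is negative sequence) the 5th
# harmonic becomes a DC component; averaging over one fundamental period rejects the rest.
i_d, i_q = park(i_a, i_b, i_c, -h * w1t)
n = int(fs / f1)
i5_d, i5_q = i_d[-n:].mean(), i_q[-n:].mean()
print(f"extracted 5th-harmonic current (d, q): ({i5_d:.3f}, {i5_q:.3f}) p.u.")
# A compensation loop would regulate these DC components to zero with PI
# controllers whose outputs are added to the normal vector-control voltage references.
```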
Procedia PDF Downloads 395
1907 Best Practice for Post-Operative Surgical Site Infection Prevention
Authors: Scott Cavinder
Abstract:
Surgical site infections (SSIs) are a known complication of any surgical procedure and are one of the most common nosocomial infections. Globally, an estimated 300 million surgical procedures take place annually, and roughly 11 of every 100 surgical patients are estimated to develop an SSI within 30 days after surgery. The specific purpose of the project is to address the PICOT (Problem, Intervention, Comparison, Outcome, Time) question: In patients who have undergone cardiothoracic or vascular surgery (P), does implementation of a post-operative care bundle based on current evidence-based practice (I), as compared to current clinical agency practice standards (C), result in a decrease in SSIs (O) over a 12-week period (T)? Synthesis of supporting evidence: A literature search of five databases, including citation chasing, was performed and yielded fourteen pieces of evidence ranging from high to good quality. Four common themes were identified for the prevention of SSIs: use and removal of surgical dressings; use of topical antibiotics and antiseptics; implementation of evidence-based care bundles; and implementation of surveillance through auditing and feedback. The Iowa Model was selected as the framework to guide the project, as it is a multiphase change process that encourages clinicians to recognize opportunities for improvement in healthcare practice. Practice/Implementation: The project will recruit post-surgical participants who have undergone cardiovascular or thoracic surgery prior to discharge at a Northwest Indiana hospital. The patients will receive education, verbal instruction, and return demonstration. The patients will be followed for 12 weeks, and wounds will be assessed utilizing the National Healthcare Safety Network/Centers for Disease Control (NHSN/CDC) assessment tool and compared to the SSI rate of 2021. Key stakeholders include two cardiovascular surgeons, four physician assistants, two advanced practice nurses, a medical assistant, and patients. Method of evaluation: Chi-square analysis will be used to establish statistical significance and similarities between the two groups. Main results/Outcomes: The proposed outcome is the prevention of SSIs in post-operative cardiothoracic and vascular patients. Implications/Recommendations: Implementation of standardized post-operative care bundles for the prevention of SSIs in cardiovascular and thoracic surgical patients.
Keywords: cardiovascular, evidence based practice, infection, post-operative, prevention, thoracic, surgery
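As a worked illustration of the chi-square evaluation mentioned above, the sketch below compares a hypothetical care-bundle group against a hypothetical 2021 baseline; the counts are invented and are not project data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = care-bundle group vs. 2021 baseline,
# columns = patients with SSI vs. without SSI (counts are illustrative only).
table = [
    [3, 97],    # post-operative care bundle group
    [11, 189],  # 2021 baseline group
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# p < 0.05 would indicate a statistically significant difference in SSI rates.
```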
Procedia PDF Downloads 83
1906 Impact of Pandemics on Cities and Societies
Authors: Deepak Jugran
Abstract:
Purpose: The purpose of this study is to identify how past pandemics shaped social evolution and cities. Methodology: A historical and comparative analysis of major pandemics in human history, covering their origin, transmission route, biological response, and aftereffects. Using available secondary data, the analysis builds a comprehensive pre- and post-pandemic scenario and focuses selectively on major issues and on the pandemics that have had the deepest and most lasting impact on society. Results: Past pandemics shaped the behavior of human societies and their cities and made them more resilient biologically, intellectually, and socially, endorsing Charles Darwin's theory of "survival of the fittest". Pandemics and infectious diseases are here to stay, and as a society we need to strengthen our collective response and preparedness, besides evolving mechanisms for strict controls on the inter-continental movement of people and especially of animals, which become carriers for these viruses. Conclusion: Pandemics have always resulted in great mortality, but they also improved overall individual human immunology and the collective social response; at the same time, they improved the public health systems of cities, health delivery systems, water and sewage distribution systems, and institutionalized various welfare reforms, endorsing Talcott Parsons' theory of "AGIL". Pandemics over the years acted like natural storms, mitigated prevailing social imbalances, and laid the foundation for scientific discoveries. We expect that, post-Covid-19, institutionalized city, state, and national mechanisms will be strengthened, and that recommendations issued by various expert groups that were previously ignored will now be implemented for reliable anticipation, better preparedness, and minimizing the impact of pandemics. Our analysis does not intend to present chronological findings of pandemics but rather focuses selectively on major pandemics in history, their causes, how they wiped out entire city populations, and how they influenced societies, their behavior, and social evolution.
Keywords: pandemics, Covid-19, social evolution, cities
Procedia PDF Downloads 114
1905 A Methodology to Virtualize Technical Engineering Laboratories: MastrLAB-VR
Authors: Ivana Scidà, Francesco Alotto, Anna Osello
Abstract:
Due to the importance given today to innovation, the education sector is evolving thanks to digital technologies. Virtual Reality (VR) is a potential teaching tool offering many advantages in the field of training and education, as it allows theoretical knowledge and practical skills to be acquired through an immersive experience in less time than the traditional educational process. These assumptions lay the foundations for a new educational environment that is engaging and stimulating for students. Starting from the objective of strengthening the innovative teaching offer and the learning processes, the case study of this research concerns the digitalization of MastrLAB, a High Quality Laboratory (HQL) belonging to the Department of Structural, Building and Geotechnical Engineering (DISEG) of the Polytechnic of Turin, a centre specialized in experimental mechanical tests on traditional and innovative building materials and on the structures made with them. MastrLAB-VR has been developed as an innovative training tool designed with the aim of educating the class, in total safety, on the techniques of use of machinery, thus reducing the dangers arising from the performance of potentially dangerous activities. The virtual laboratory, dedicated to the students of the Building and Civil Engineering courses of the Polytechnic of Turin, has been designed to simulate in a realistic way the experimental approach to the structural tests foreseen in their courses of study: from tensile tests to relaxation tests, from steel qualification tests to resilience tests on elements at environmental conditions or at characterizing temperatures. The research work proposes a methodology for the virtualization of technical laboratories through the application of Building Information Modelling (BIM), starting from the creation of a digital model. The process includes the creation of an independent application which, with Oculus Rift technology, allows the user to explore the environment and interact with objects through the use of joypads. The application has been tested as a prototype on volunteers; the acquisition of the educational notions presented in the experience was assessed through a multiple-choice virtual quiz, producing an overall evaluation report. The results show that MastrLAB-VR is suitable for both beginners and experts and will be adopted experimentally for other laboratories of the University departments.
Keywords: building information modelling, digital learning, education, virtual laboratory, virtual reality
Procedia PDF Downloads 131
1904 Factors Influencing Capital Structure: Evidence from the Oil and Gas Industry of Pakistan
Authors: Muhammad Tahir, Mushtaq Muhammad
Abstract:
Capital structure is one of the key decisions taken by financial managers. This study investigates the factors influencing capital structure decisions in the oil and gas industry of Pakistan, using secondary data from the published annual reports of listed oil and gas companies of Pakistan over the period 2008-2014. Capital structure can be affected by profitability, firm size, growth opportunities, dividend payout, liquidity, business risk, and ownership structure. A panel data technique with an ordinary least squares (OLS) regression model was used in Stata to estimate the impact of a set of explanatory variables on capital structure. The OLS regression results suggest that dividend payout, firm size, and government ownership have the most significant impact on financial leverage. Dividend payout and government ownership are found to have a significant negative association with financial leverage, whereas firm size shows a positive relationship with financial leverage. Other variables with a significant link to financial leverage include growth opportunities, liquidity, and business risk. The results reveal a significant positive association between growth opportunities and financial leverage, whereas liquidity and business risk are negatively correlated with financial leverage. Profitability and managerial ownership exhibit an insignificant relationship with financial leverage. This study contributes to the existing managerial finance literature with certain managerial implications. Academically, it describes the factors affecting the capital structure decisions of oil and gas companies in Pakistan and adds recent empirical evidence to the existing financial literature in Pakistan. Researchers have studied capital structure in Pakistan in general and by industry in particular, but the literature on this issue is still limited; this study attempts to fill that gap. The study also has practical implications at both the firm level and the individual investor/lender level. The results can be useful for investors and lenders in making investment and lending decisions, and for financial managers in framing an optimal capital structure that takes into consideration the factors shown here to affect capital structure decisions. These results will help financial managers decide whether to issue stock or debt for future investment projects.
Keywords: capital structure, multicollinearity, ordinary least square (OLS), panel data
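A minimal sketch of the kind of OLS estimation on firm-year panel data described above (the study itself used Stata); the variable names and toy data frame below are hypothetical, not the study's dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel; variable names and values are illustrative only.
df = pd.DataFrame({
    "firm":      ["A", "A", "B", "B", "C", "C", "D", "D"],
    "year":      [2013, 2014, 2013, 2014, 2013, 2014, 2013, 2014],
    "leverage":  [0.42, 0.45, 0.30, 0.28, 0.55, 0.57, 0.38, 0.40],
    "size":      [8.1, 8.3, 7.2, 7.3, 9.0, 9.1, 7.8, 7.9],   # e.g. log of total assets
    "payout":    [0.35, 0.30, 0.50, 0.55, 0.20, 0.22, 0.40, 0.38],
    "liquidity": [1.8, 1.7, 2.4, 2.5, 1.1, 1.0, 2.0, 2.1],
})

# Pooled OLS of leverage on firm characteristics; a fixed-effects variant would
# add firm dummies, e.g. "leverage ~ size + payout + liquidity + C(firm)".
result = smf.ols("leverage ~ size + payout + liquidity", data=df).fit()
print(result.params)
```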
Procedia PDF Downloads 295
1903 Examining Terrorism through a Constructivist Framework: Case Study of the Islamic State
Authors: Shivani Yadav
Abstract:
The study of terrorism lends itself to the constructivist framework, as constructivism focuses on the importance of ideas and norms in shaping interests and identities. Constructivism is pertinent to understanding the phenomenon of a terrorist organization like the Islamic State (IS), which opportunistically utilizes radical ideas and norms to shape its 'politics of identity'. This 'identity', which is at the helm of actors' preferences and interests, in turn shapes actions. The paper argues that an effective counter-terrorism policy must recognize the importance of ideas in order to counter the threat arising from acts of radicalism and terrorism. Traditional theories of international relations, with their emphasis on the state-centric security problematic, exhibit several limitations in interpreting the phenomenon of terrorism. With the changing global order, these theories have failed to adapt to the changing dimensions of terrorism, especially 'newer' actors like the Islamic State. The paper observes that IS distinguishes itself from other terrorist organizations in the way it recruits and spreads its propaganda. Not only are its methods different, but its tools (like social media) are also new. Traditionally, too, force alone has rarely been sufficient to counter terrorism, and it seems especially unlikely to completely root out an organization like IS. The time is ripe to change the discourse around terrorism and counter-terrorism strategies. The counter-terrorism measures adopted by states, which primarily focus on mitigating threats to national security, are preoccupied with the statist objectives of the continuance of state institutions and the maintenance of order. This limitation prevents these theories from addressing questions of justice and the 'human' aspects of ideas and identity. These counter-terrorism strategies adopt a problem-solving approach that attempts to treat the symptoms without diagnosing the disease. Hence, such restrictive strategies fail to look beyond calculated retaliation against violent actions and do not address the underlying causes of discontent, namely 'why' actors turn violent in the first place. What traditional theories also overlook is that overt acts of violence may have several causal factors behind them, some of which are rooted in the structural state system. Exploring these root causes through the constructivist framework helps to decipher the process of the 'construction of terror' and to move beyond the 'what' in theorization in order to describe 'why', 'how', and 'when' terrorism occurs. The study of terrorism would benefit greatly from a constructivist analysis, which allows non-military options to be explored while countering the ideology propagated by IS.
Keywords: constructivism, counter terrorism, Islamic State, politics of identity
Procedia PDF Downloads 189
1902 Development of a Stable RNAi-Based Biological Control for Sheep Blowfly Using Bentonite Polymer Technology
Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody
Abstract:
Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M annually). Currently, control of Lucilia cuprina relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is therefore critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls. This research project aims to develop an RNAi-based biological control for sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi, and insects. However, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for the controlled release of dsRNA in the challenging pH environment of the blowfly larval gut, prolonging its exposure time to, and uptake by, target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities, and three of them were found to afford dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on the stability results, dsRNA from potential target genes was loaded onto BenPol carriers and tested in larval feeding assays, with three genes showing knockdown. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, intended as an insect cell model for preliminary assessment and screening of RNAi efficacy. The results of this study establish that dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which is rapidly degraded in sheep serum. The stable nanoparticle delivery system offered by BenPol technology can protect and increase the inherent stability of dsRNA molecules at higher temperatures in a complex biological fluid such as serum, providing promise for its future use in enhancing animal protection.
Keywords: flystrike, RNA interference, bentonite polymer technology, Lucilia cuprina
Procedia PDF Downloads 92
1901 Transition from Linear to Circular Business Models with Service Design Methodology
Authors: Minna-Maari Harmaala, Hanna Harilainen
Abstract:
Estimates of the economic value of transitioning to circular economy models vary, but it has been estimated to represent $1 trillion worth of new business in the global economy. In Europe alone, estimates claim that adopting circular-economy principles could not only have environmental and social benefits but also generate a net economic benefit of €1.8 trillion by 2030. Proponents of a circular economy argue that it offers a major opportunity to increase resource productivity, decrease resource dependence and waste, and increase employment and growth. A circular system could improve competitiveness and unleash innovation. Yet most companies are not capturing these opportunities, and even abundant circular opportunities remain uncaptured although they would seem inherently profitable. Service design, in broad terms, relates to developing an existing or new service or service concept with emphasis and focus on the customer experience from the onset of the development process. Service design may even mean starting from scratch and co-creating the service concept entirely with customer involvement. Service design methodologies provide a structured way of incorporating customer understanding and involvement into the process of designing better services that better resonate with customer needs. A business model is a depiction of how a company creates, delivers, and captures value, i.e., how it organizes its business. The process of business model development, adjustment, or modification is also called business model innovation, and innovating business models has become a part of business strategy. Our hypothesis is that, in addition to linear models still being easier to adopt and often having lower threshold costs, companies lack an understanding of how circular models can be adopted into their business and of how willing and ready customers are to adopt new circular business models. In our research, we use a robust service design methodology to develop circular economy solutions with two case study companies. The aim of the process is not only to develop the service concepts and portfolio but also to demonstrate that the willingness to adopt circular solutions exists in the customer base. In addition to service design, we employ business model innovation methods to further develop, test, and validate the new circular business models. The results clearly indicate that among the customer groups there are specific customer personas that are willing to adopt circular solutions and in fact expect the companies to take a leading role in the transition towards a circular economy. At the same time, there is a group of indifferent customers, to whom the idea of circularity provides no added value. In addition, the case studies clearly show what changes the adoption of circular economy principles brings to the existing business model and how they can be integrated.
Keywords: business model innovation, circular economy, circular economy business models, service design
Procedia PDF Downloads 136
1900 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter
Authors: Zhu Xinxin, Wang Hui, Yang Kai
Abstract:
Heat flux is one of the most important test parameters in ground thermal protection testing. The slug calorimeter is selected as the main sensor for measuring heat flux in arc wind tunnel tests due to its convenience and low cost. However, because of excessive lateral heat transfer and the disadvantages of the calculation method, the heat flux measurement error of the slug calorimeter is large. In order to enhance measurement accuracy, the heat insulation structure and the heat flux calculation method of the slug calorimeter were improved. A heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on this model, an insulating sleeve with a hollow structure was designed, which greatly decreased lateral heat transfer, and the slug with the hollow insulating sleeve was encapsulated in a package shell. The improved insulation structure reduced heat loss and ensured that the heat transfer characteristics were almost the same during calibration and testing. A heat flux calibration test was carried out in an arc lamp system for heat flux sensor calibration, and the results show that the test accuracy and precision of the slug calorimeter are greatly improved. In the meantime, a simulation model of the slug calorimeter was built, and the heat flux values in different temperature-rise time periods were calculated with it. The results show that extracting the temperature-rise-rate data as early as possible results in a smaller heat flux calculation error. The effect of different thermal contact resistances on the calculation error was then analyzed with the simulation model, and the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. A direct comparison calibration correction method was proposed based on heat flux calibration alone, and a numerical calculation correction method was proposed based on the heat flux calibration together with the simulation model of the slug calorimeter, once the contact resistance between the slug and the insulating sleeve had been identified. The simulation and test results show that both methods can greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. The test results show that the repeatability of the improved slug calorimeter is within 3%, that the deviation of measured values between different slug calorimeters is less than 3% in the same flow field, and that the deviation between the slug calorimeter and a Gardon gauge is less than 4% in the same flow field.
Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter
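As a worked illustration of the basic slug-calorimeter relation implied by the energy conservation principle above, the sketch below computes heat flux from the slug's temperature-rise rate; the material properties and temperature readings are hypothetical, and the lateral-loss and contact-resistance corrections discussed in the abstract are not included.

```python
import numpy as np

# Energy balance on an ideally insulated slug: q = (m * c / A) * dT/dt,
# i.e. absorbed heat flux equals the stored-energy rate per unit sensing area.
# All numbers below are illustrative, not from the study.
rho = 8930.0         # copper density, kg/m^3
c = 385.0            # specific heat, J/(kg*K)
thickness = 0.003    # slug thickness, m (mass per unit area = rho * thickness)

t = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])               # s
T = np.array([300.0, 312.0, 324.5, 336.8, 349.0, 361.3])   # K, hypothetical readings

# Fit the early, nearly linear part of the temperature rise to get dT/dt.
dT_dt = np.polyfit(t, T, 1)[0]                              # K/s
q = rho * thickness * c * dT_dt                             # W/m^2
print(f"dT/dt = {dT_dt:.1f} K/s, heat flux ≈ {q / 1e4:.1f} W/cm^2")
```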
Procedia PDF Downloads 118
1899 A Comparative Study to Evaluate Changes in Intraocular Pressure with Thiopentone Sodium and Etomidate in Patients Undergoing Surgery for Traumatic Brain Injury
Authors: Vasudha Govil, Prashant Kumar, Ishwar Singh, Kiranpreet Kaur
Abstract:
Traumatic brain injury leads to elevated intracranial pressure, and intraocular pressure (IOP) may also be affected: increased venous pressure in the cavernous sinus is transmitted to the episcleral veins, resulting in an increase in IOP. All drugs used for induction of anaesthesia can change IOP, and irritation of the gag reflex by the endotracheal tube can also increase it; therefore, administration of the anaesthetic drugs that produce the smallest change in IOP is important, while cardiovascular depression must also be avoided. Thiopentone decreases IOP by 40%, whereas etomidate decreases IOP by 30-60% for up to 5 minutes. One hundred patients (aged 18-55 years) undergoing emergency craniotomy for traumatic brain injury were selected for the study and randomly assigned to two groups of 50 patients each according to the induction drug: group T received thiopentone sodium (5 mg kg-1) and group E received etomidate (0.3 mg kg-1). Pre-anaesthesia intraocular pressure (IOP) was measured using a Schiotz tonometer. Induction of anaesthesia was achieved with etomidate (0.3 mg kg-1) or thiopentone (5 mg kg-1) along with fentanyl (2 mcg kg-1), and intravenous rocuronium (0.9 mg kg-1) was given to facilitate intubation. IOP was measured 1 minute after administration of the induction agent and 5 minutes after intubation. Anaesthesia was maintained with isoflurane in 50% nitrous oxide at a fresh gas flow of 5 litres. At the end of surgery, the residual neuromuscular block was reversed and the patient was shifted to the ward/ICU. Patients in both groups were comparable in terms of demographic profile, and there was no significant difference between the groups in the haemodynamic and respiratory variables prior to thiopentone or etomidate administration. Before induction, IOP in the thiopentone group was 14.97±3.94 mmHg in the left eye and 14.72±3.75 mmHg in the right eye, and in the etomidate group 15.28±3.69 mmHg and 15.54±4.46 mmHg, respectively. After induction, IOP decreased significantly in both eyes (p<0.001) in both groups. Five minutes after intubation, IOP was significantly lower than baseline in both eyes but higher than immediately after induction. There was no statistically significant difference in IOP between the two groups at any point in time. Both drugs caused a significant decrease in IOP after induction and 5 minutes after endotracheal intubation. The mechanism of the decrease in IOP produced by intravenous induction agents is debatable: systemic hypotension after induction of anaesthesia has been shown to decrease intra-ocular pressure, and a decrease in the tone of the extra-ocular muscles can also reduce it. We conclude that it is appropriate to use etomidate as an induction agent when elevation of intra-ocular pressure is undesirable, owing to the cardiovascular stability it confers on patients.
Keywords: etomidate, intraocular pressure, thiopentone, traumatic
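To illustrate the kind of between-group comparison reported above, the sketch below runs a two-sample t-test from summary statistics using the pre-induction left-eye means and standard deviations quoted in the abstract (group sizes of 50 are taken from the study design); this is an illustrative recomputation, not the authors' analysis.

```python
from scipy.stats import ttest_ind_from_stats

# Pre-induction left-eye IOP (mean ± SD, mmHg) as quoted in the abstract.
t_stat, p_value = ttest_ind_from_stats(
    mean1=14.97, std1=3.94, nobs1=50,   # thiopentone group
    mean2=15.28, std2=3.69, nobs2=50,   # etomidate group
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value well above 0.05 is consistent with the reported lack of a
# significant baseline difference between the two groups.
```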
Procedia PDF Downloads 126
1898 Long Term Survival after a First Transient Ischemic Attack in England: A Case-Control Study
Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski
Abstract:
Transient ischaemic attacks (TIAs) are warning signs for future strokes: TIA patients are at increased risk of stroke and cardiovascular events after a first episode. The majority of studies on TIA have focused on the occurrence of these ancillary events, while long-term mortality after TIA has received only limited attention. We undertook this study to determine the long-term hazards of all-cause mortality following a first TIA episode using anonymised electronic health records (EHRs). This was a retrospective case-control study using electronic primary health care records from The Health Improvement Network (THIN) database. Patients born in or before 1960, resident in England, with a first diagnosis of TIA between January 1986 and January 2017 were matched to three controls on age, sex, and general medical practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a time-varying Weibull-Cox survival model which included both scale and shape effects and a random frailty effect of GP practice. 20,633 cases and 58,634 controls were included. Cases aged 39 to 60 years at the first TIA event had the highest hazard ratio (HR) of mortality compared to matched controls (HR = 3.04, 95% CI 2.91-3.18). The HRs for cases aged 61-70, 71-76, and 77+ years were 1.98 (1.55-2.30), 1.79 (1.20-2.07), and 1.52 (1.15-1.97), respectively, compared to matched controls. Aspirin provided long-term survival benefits to cases: cases aged 39-60 years on aspirin had HRs of 0.93 (0.84-1.00), 0.90 (0.82-0.98), and 0.88 (0.80-0.96) at 5, 10, and 15 years, respectively, compared to cases in the same age group who were not on antiplatelets, and similar beneficial effects of aspirin were observed in the other age groups. There were no significant survival benefits with other antiplatelet options, and no survival benefits of antiplatelet drugs were observed in controls. Our study highlights the excess long-term risk of death in TIA patients and cautions that TIA should not be treated as a benign condition. It further recommends aspirin as a better option for secondary prevention in TIA patients than the clopidogrel recommended by NICE guidelines. Management of risk factors and treatment strategies remain important challenges in reducing the burden of disease.
Keywords: dual antiplatelet therapy (DAPT), general practice, multiple imputation, The Health Improvement Network (THIN), hazard ratio (HR), Weibull-Cox model
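As a simplified illustration of fitting a Weibull survival regression of the general family used above (without the time-varying effects or the GP-practice frailty term described in the abstract), the sketch below uses the lifelines library on an invented toy dataset; it is not the THIN analysis.

```python
import pandas as pd
from lifelines import WeibullAFTFitter

# Toy data: follow-up time in years, death indicator, TIA case flag, age.
# Values are invented for illustration; they are not the THIN study data.
df = pd.DataFrame({
    "time":  [2.1, 5.4, 8.0, 1.2, 9.5, 3.3, 7.7, 6.1, 4.4, 10.0],
    "death": [1,   0,   1,   1,   0,   1,   0,   1,   0,   0],
    "tia":   [1,   1,   1,   1,   1,   0,   0,   0,   0,   0],
    "age":   [55,  62,  71,  78,  60,  57,  64,  73,  79,  61],
})

# Weibull accelerated-failure-time model: covariates act on the scale (lambda_)
# parameter; a regression on the shape (rho_) parameter can also be added
# (see lifelines' ancillary option).
aft = WeibullAFTFitter()
aft.fit(df, duration_col="time", event_col="death")
aft.print_summary()
```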
Procedia PDF Downloads 150
1897 Governance Models of Higher Education Institutions
Authors: Zoran Barac, Maja Martinovic
Abstract:
Higher Education Institutions (HEIs) are a special kind of organization, with a unique purpose and combination of actors. From the societal point of view, they are central institutions involved in the activities of education, research, and innovation. At the same time, their societal function gives rise to complex relationships between the involved actors, ranging from students, faculty, and administration, through the business community and corporate partners, to government agencies and the general public. HEIs are particularly interesting as objects of governance research because of their unique public purpose and combination of stakeholders, and they are also a special type of institution from an organizational viewpoint: they are often described as "loosely coupled systems" or "organized anarchies", which implies the challenging nature of their governance models. Governance models of HEIs describe the roles, constellations, and modes of interaction of the involved actors in the process of strategic direction and holistic control of institutions, taking each particular context into account. Many HEI governance models are primarily based on the balance of power among the involved actors; besides the actors' power and influence, leadership style and environmental contingency can also shape the governance model of an HEI. Analyzed through the frameworks of institutional and contingency theories, HEI governance models originate as outcomes of institutional and contingency adaptation. HEIs tend to fit an institutional context comprised of formal and informal institutional rules; by fitting this context, HEIs converge towards each other in terms of their structures, policies, and practices. The contingency framework, on the other hand, implies that there is no governance model suitable for all situations. Consequently, the contingency approach begins by identifying contingency variables that might affect a particular governance model; to be effective, the governance model should fit these variables. While the institutional context creates converging forces on HEI governance actors and approaches, contingency variables cause divergence of actors' behavior and governance models. An HEI governance model is thus a balanced adaptation of the HEI to the institutional context and the contingency variables, encompassing the roles, constellations, and modes of interaction of involved actors under institutional and contingency pressures. Actors' adaptation to the institutional context brings the benefits of legitimacy and resources, while adaptation to the contingency variables brings high performance and effectiveness. The HEI governance models outlined and analyzed in this paper are the collegial, bureaucratic, entrepreneurial, network, professional, political, anarchical, cybernetic, trustee, stakeholder, and amalgam models.
Keywords: governance, governance models, higher education institutions, institutional context, situational context
Procedia PDF Downloads 337
1896 Community Based Psychosocial Intervention Reduces Maternal Depression and Infant Development in Bangladesh
Authors: S. Yesmin, N. F.Rahman, R. Akther, T. Begum, T. Tahmid, T. Chowdury, S. Afrin, J. D. Hamadani
Abstract:
Maternal depression is one of the risk factors for developmental delay in young children in low-income countries, and maternal depression during pregnancy is rarely reported in Bangladesh. Objectives: The purpose of the present study was to examine the efficacy of a community-based psychosocial intervention for women with mild to moderate depressive illness during the perinatal period, assessing both the mothers' mental status and their infants' growth and development from birth to 12 months. Methodology: The study followed a prospective longitudinal approach with a randomized controlled design. A total of 250 pregnant women aged between 15 and 40 years were enrolled in their third trimester of pregnancy, of whom 125 were in the intervention group and 125 in the control group. Women in the intervention group received the "Thinking Healthy" (CBT-based) program in their home setting, from the last month of pregnancy until 10 months after delivery, and their children received psychosocial stimulation from birth until 12 months. The following instruments were applied to obtain the outcome information: the Bangla version of the Edinburgh Postnatal Depression Scale (BEPDS), the Prenatal Attachment Inventory (PAI), the Maternal Attachment Inventory (MAI), the Bayley Scales of Infant Development, third edition (Bayley-III), and the Family Care Indicator (FCI). In addition, information on severe morbidity, breastfeeding, immunization, and socio-economic and demographic status was collected. Data were collected at three time points: baseline, midline (6 months after delivery), and endline (12 months after delivery). Results: There was no significant difference between any of the socioeconomic and demographic variables at baseline. A preliminary analysis of the data shows an intervention effect on children's socio-emotional behaviour at endline (p<0.001), motor development at midline (p=0.016) and endline (p=0.065), language development at midline (p=0.004) and endline (p=0.023), cognitive development at midline (p=0.008) and endline (p=0.002), and the quality of psychosocial stimulation at midline (p=0.023) and endline (p=0.010). EPDS scores at baseline did not differ between the groups (p=0.419), but following the intervention there was a significant improvement at midline (p=0.027) and endline (p=0.024). Conclusion: The psychosocial intervention was found effective in reducing mild and moderate depressive illness in women, helping them cope with mental health problems, and in improving the development of young children in Bangladesh.
Keywords: mental health, maternal depression, infant development, CBT, EPDS
Procedia PDF Downloads 275
1895 Evaluating Value of Users' Personal Information Based on Cost-Benefit Analysis
Authors: Jae Hyun Park, Sangmi Chai, Minkyun Kim
Abstract:
As users spend more time on the Internet, the probability of their personal information being exposed has been growing. The main purpose of this research is to investigate the factors, and examine the relationships, involved when Internet users assess the value of their private information from the perspective of an economic asset. The study targets Internet users, and the value of their private information is converted into economic figures; it also studies how that economic value changes in relation to individual attributes, the traits of the party with whom the information is traded, and circumstantial properties. The changes in factors affecting private information value in different situations are analyzed from an economic perspective, and the associations between users' perceived risk and the value of their personal information are examined. Using the cost-benefit analysis framework, we test the hypothesis that users' sense of the value of their private information can be influenced by individual attributes and situational properties. The research therefore addresses three objectives. First, it identifies factors that affect users' recognition of the value of their personal information. Second, it provides evidence that information system users' economic valuation of their information differs in response to personal, trade opponent, and situational attributes. Third, it investigates the impact of those attributes on individuals' perceived risk. Based on the assumption that personal, trade opponent, and situational attributes influence users' recognition of the value of private information, the research presents an economic perspective on the different impacts of those attributes and demonstrates the associative relationship between perceived risk and users' valuation of their personal information. To validate the research model, regression methodology was used. Our results support that information breach experience and information security systems are associated with users' perceived risk, and that information control and uncertainty are also related to perceived risk. Therefore, perceived risk is a significant factor in evaluating the value of personal information, and it can be differentiated by trade opponent and situational attributes. This research presents a new perspective on evaluating the value of users' personal information in the context of perceived risk and of personal, trade opponent, and situational attributes. It fills a gap in the literature by showing how users' perceived risk is associated with personal, trade opponent, and situational attributes when conducting business transactions that involve providing personal information. It adds to previous literature the finding that a relationship exists between perceived risk and the value of users' private information in an economic perspective, and it provides meaningful insights for managers: to minimize the cost of information breaches, managers need to recognize the value of individuals' personal information and decide on the proper amount of investment in protecting users' online information privacy.
Keywords: private information, value, users, perceived risk, online information privacy, attributes
Procedia PDF Downloads 239
1894 Comparison of the Effectiveness of Pain Cognitive-Behavioral Therapy and Its Computerized Version on Reduction of Pain Intensity, Depression, Anger and Anxiety in Children with Cancer: A Randomized Controlled Trial
Authors: Najmeh Hamid, Vajiheh Hamedy , Zahra Rostamianasl
Abstract:
Background: Cancer is one of the medical problems associated with pain, and this pain is combined with negative emotions such as anxiety, depression, and anger. Poor pain management harms quality of life, with negative effects that continue long after the painful experience. Objectives: The aim of this research was to compare the effectiveness of conventional cognitive-behavioral therapy (CBT) for pain and its computerized version in reducing pain intensity, depression, anger, and anxiety in children with cancer. Methods: This randomized controlled clinical trial used a pre-test, post-test, and follow-up design with a control group, examining the effectiveness of the two interventions in children with cancer in Ahvaz. The sample consisted of children aged 8 to 12 years with different types of cancer at Shafa hospital in Ahvaz. According to inclusion and exclusion criteria such as age, socioeconomic status, and a clinical diagnostic interview, 60 children were screened, from whom 45 subjects were randomly selected and divided into three groups of 15 (two experimental groups and one control group). The research instruments included the Spielberger Anxiety Inventory (STAI-2) and the International Pain Measurement Scale. The first experimental group received 6 weekly sessions of cognitive-behavioral therapy for pain, the second group received the computerized version of the therapy for 6 weeks, and the control group received no intervention during the study; for ethical reasons, the computerized cognitive-behavioral therapy was provided to the control group afterwards. After 6 weeks, all three groups were evaluated at post-test and again after a one-month follow-up. Results: The findings indicated that both interventions reduced the negative emotions (pain, anger, anxiety, depression) associated with cancer in children in comparison with the control group (p<0.0001), and there were no significant differences between the two interventions (p<0.01). This means both interventions are useful for reducing the negative effects of pain and enhancing adjustment. Conclusion: CBT, including its computerized version, can be used in situations in which there is no access to psychologists and psychological services, and it can be a useful alternative to conventional psychological interventions.
Keywords: pain, children, psychological intervention, cancer, anger, anxiety, depression
Procedia PDF Downloads 80
1893 Nursing Professionals' Perception of the Work Environment, Safety Climate and Job Satisfaction in the Brazilian Hospitals during the COVID-19 Pandemic
Authors: Ana Claudia de Souza Costa, Beatriz de Cássia Pinheiro Goulart, Karine de Cássia Cavalari, Henrique Ceretta Oliveira, Edineis de Brito Guirardello
Abstract:
Background: During the COVID-19 pandemic, nursing represented the largest category of health professionals on the front line. Investigating the practice environment and the job satisfaction of nursing professionals during the pandemic is therefore fundamental, since these reflect on the quality of care and the safety climate. The aim of this study was to evaluate and compare nursing professionals' perceptions of the work environment, job satisfaction, and safety climate across different hospitals and work shifts during the COVID-19 pandemic. Method: This was a cross-sectional survey of 130 nursing professionals from public, private, and mixed hospitals in Brazil. Data were collected with an electronic form containing personal and occupational variables and measures of the work environment, job satisfaction, and safety climate. The data were analyzed using descriptive statistics and ANOVA or Kruskal-Wallis tests according to the data distribution, which was evaluated with the Shapiro-Wilk test. The analysis was carried out in SPSS 23 at a significance level of 5%. Results: The mean age of the participants was 35 years (±9.8), with a mean of 6.4 years (±6.7) of working experience in the institution. Overall, the nursing professionals evaluated the work environment as favorable but had a negative perception of the safety climate; they were dissatisfied with their job in terms of pay, promotion, benefits, contingent rewards, and operating procedures, and satisfied with coworkers, the nature of the work, supervision, and communication. When comparing hospitals, perceptions of the work environment and safety climate did not differ, but job satisfaction did: nursing professionals from public hospitals were more dissatisfied with promotion than professionals from private (p=0.02) and mixed hospitals (p<0.01), and nursing professionals from mixed hospitals were more satisfied with supervision than those from private hospitals (p=0.04). Participants working night shifts had the worst perceptions of the work environment with regard to nurse participation in hospital affairs (p=0.02), nursing foundations for quality care (p=0.01), and nurse manager ability, leadership, and support (p=0.02), as well as of the safety climate (p<0.01) and of job satisfaction related to contingent rewards (p=0.04), nature of work (p=0.03), and supervision (p<0.01). Conclusion: The nursing professionals had a favorable perception of the work environment but a negative perception of the safety climate, and they differed among hospitals regarding job satisfaction in the promotion and supervision domains. There were also differences between work shifts, with night-shift participants having the lowest scores, except for satisfaction with operational conditions.
Keywords: health facility environment, job satisfaction, patient safety, nursing
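A minimal sketch of the distribution-check-then-test workflow described above (the study used SPSS; this is an illustrative recomputation in Python with invented scores, not the survey data).

```python
from scipy.stats import shapiro, f_oneway, kruskal

# Hypothetical satisfaction scores for three hospital types (illustrative only).
public  = [3.1, 2.8, 3.4, 2.9, 3.0, 2.7, 3.2]
private = [3.6, 3.9, 3.5, 3.8, 3.7, 3.4, 3.6]
mixed   = [3.5, 3.3, 3.8, 3.6, 3.4, 3.7, 3.5]

# Check normality of each group; if all pass, use ANOVA, otherwise Kruskal-Wallis.
normal = all(shapiro(g)[1] > 0.05 for g in (public, private, mixed))
if normal:
    stat, p = f_oneway(public, private, mixed)
    test = "ANOVA"
else:
    stat, p = kruskal(public, private, mixed)
    test = "Kruskal-Wallis"
print(f"{test}: statistic = {stat:.2f}, p = {p:.3f} (alpha = 0.05)")
```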
Procedia PDF Downloads 158
1892 Effect of the Orifice Plate Specifications on Coefficient of Discharge
Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer
Abstract:
Because the orifice plate is relatively inexpensive, requires very little maintenance, and is only calibrated at plant turnarounds, it has come into prevalent use in the gas industry. Measurement inaccuracy in fiscal metering stations may well be the most important factor behind mischarges in the natural gas industry in Libya: a trivial measurement error can add a fast-escalating financial burden to custody transfer transactions, and the unaccounted gas quantity transferred annually via orifice plates in Libya can be estimated in the range of multiple millions of dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard; hence, increasing knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a drastic pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of fluid mechanics and heat and mass transfer in various industrial applications; probing the underlying physical phenomena and predicting all relevant parameters and variables with high spatial and temporal resolution are its greatest advantages. In this paper, the flow of air through an orifice meter was numerically analyzed with CFD-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings compared with vena contracta tappings. The discharge coefficients were compared with the discharge coefficients estimated by ISO 5167. The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, and perpendicularity and buckling of the orifice plate were all investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5, and a Reynolds number of 91,100 was taken as the model case. The results highlight that the discharge coefficients were highly responsive to variations in the plate specifications and that, in all cases, the discharge coefficients for D and D/2 tappings were very close to those for vena contracta tappings, which are regarded as the ideal arrangement. In a general sense, it was also found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough consideration is still needed.
Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications
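As a worked illustration of what the discharge coefficient represents (not the ISO 5167 correlation itself, and not the paper's CFD results), the sketch below computes Cd from its defining relation between mass flow and differential pressure for a hypothetical orifice run; all numbers are invented.

```python
import math

# Definition-based discharge coefficient (incompressible flow, expansibility = 1):
#   qm = Cd / sqrt(1 - beta^4) * (pi/4) * d^2 * sqrt(2 * dp * rho)
# All values below are hypothetical, not the paper's CFD case.
D = 0.0508            # pipe diameter, m (2 in)
beta = 0.5            # orifice-to-pipe diameter ratio
d = beta * D          # orifice bore diameter, m
rho = 1.2             # air density, kg/m^3
dp = 2500.0           # differential pressure across the tappings, Pa
qm = 0.0243           # measured (or CFD-predicted) mass flow rate, kg/s

area = math.pi / 4.0 * d ** 2
cd = qm * math.sqrt(1.0 - beta ** 4) / (area * math.sqrt(2.0 * dp * rho))
print(f"Discharge coefficient Cd ≈ {cd:.3f}")   # ~0.60 for these illustrative values
```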
Procedia PDF Downloads 119
1891 Clustering-Based Computational Workload Minimization in Ontology Matching
Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris
Abstract:
In order to build a matching pattern for each class correspondence between ontologies, a set of attribute correspondences across the two corresponding classes must be specified, and clustering is used to do this. Clustering reduces the number of potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching, which makes ontology matching computationally expensive. It is therefore vital to design a scalable element or attribute correspondence discovery method that reduces the number of potential element correspondences during mapping and thereby reduces the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets are compared on their attribute values to decide whether they can be regarded as the same. Intuitively, two instances that come from classes between which there is a class correspondence are likely to be identical, and two instances with more similar attribute values are more likely to match than ones with less similar values; most of the time, similar attribute values exist in the two instances across which there is an attribute correspondence. This work presents how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences that is then used to generate a matching pattern. The K-medoids clustering phase largely removes non-corresponding attribute pairs from the instance comparison, as only attribute pairs whose coverage probability reaches 100% and attributes above a specified threshold are considered potential attributes for a matching. Using clustering thus reduces the number of potential element correspondences considered during the mapping activity, which in turn significantly reduces the computational workload; otherwise, every element of a class in the source ontology would have to be compared with every element of the corresponding classes in the target ontology.
Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching
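A minimal sketch of K-medoids (PAM-style) clustering of attributes by simple numeric value features; the attribute names, feature vectors, and distance choice below are illustrative assumptions, not the authors' pipeline.

```python
import random

def k_medoids(points, k, n_iter=100, seed=0):
    """Simple K-medoids: assign points to the nearest medoid, then move each
    medoid to the cluster member that minimizes total intra-cluster distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    medoids = random.Random(seed).sample(range(len(points)), k)
    for _ in range(n_iter):
        # Assignment step.
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            clusters[min(medoids, key=lambda m: dist(p, points[m]))].append(i)
        # Update step: pick the most central member of each cluster.
        new_medoids = [
            min(members, key=lambda c: sum(dist(points[c], points[j]) for j in members))
            for members in clusters.values()
        ]
        if set(new_medoids) == set(medoids):
            return medoids, clusters
        medoids = new_medoids
    return medoids, clusters

# Hypothetical attribute value features: (mean value length, fraction of numeric values).
attributes = {"name": (12.0, 0.0), "title": (15.0, 0.1), "price": (4.0, 1.0),
              "cost": (5.0, 1.0), "zip_code": (5.0, 0.9)}
names = list(attributes)
medoids, clusters = k_medoids(list(attributes.values()), k=2)
for m, members in clusters.items():
    print(f"cluster around '{names[m]}': {[names[i] for i in members]}")
```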
Procedia PDF Downloads 2501890 Juvenile Fish Associated with Pondweed and Charophyte Habitat: A Case Study Using Upgraded Pop-up Net in the Estuarine Part of the Curonian Lagoon
Authors: M. Bučas, A. Skersonas, E. Ivanauskas, J. Lesutienė, N. Nika, G. Srėbalienė, E. Tiškus, J. Gintauskas, A. Šaškov, G. Martin
Abstract:
Submerged vegetation enhances the heterogeneity of sublittoral habitats; therefore, macrophyte stands are essential elements of aquatic ecosystems for maintaining a diverse fish fauna. Fish-habitat relations have been extensively studied in streams and coastal waters, but they are still underestimated in lakes and estuaries. The aim of this study is to assess the temporal (diurnal and seasonal) patterns of juvenile fish assemblages associated with common submerged macrophyte habitats, which have spread significantly during the recent decade in the upper littoral part of the Curonian Lagoon. The assessment was performed with an upgraded pop-up net approach, which gives much more precise sampling than other techniques. The optimal number of samples (i.e., pop-up nets) required to cover >80% of the total number of fish species depended on the time of day at both study sites: at least 7 and 9 nets in the evening (18-24 pm) in the Southern and Northern study sites, respectively. In total, 14 fish species were recorded, dominated by perch and roach (48% and 24%, respectively). According to the multivariate analysis, water salinity and seasonality (temperature or sampling month) were the primary factors determining the composition of the fish assemblage. The southern littoral area, which is less affected by brackish water, hosted a higher number of species (13) than the Northern site (8). In the latter site, brackish-water-tolerant species (three-spined and nine-spined sticklebacks, spiny loach, roach, and round goby) were more abundant than in the Southern site, where perch and ruffe dominated. Spiny loach and nine-spined stickleback were more frequent in September, while ruffe, perch, and roach occurred more in July. The diel dynamics of the common species such as perch, roach, and ruffe followed the general pattern, but they were species-specific and depended on the study site, habitat, and month. The species composition did not differ significantly between macrophyte habitats; however, it differed from the results obtained in 2005 at both study sites, indicating the importance of the charophyte stands that have expanded in the littoral zone during the last decade.Keywords: diel dynamics, charophytes, pondweeds, herbivorous and benthivorous fishes, littoral, nursery habitat, shelter
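One way to estimate how many nets are needed to cover >80% of the species pool is to resample presence/absence records per net, a species-accumulation style calculation. The sketch below assumes a toy presence matrix; it is not the actual catch data from the lagoon.

```python
import numpy as np

def nets_for_coverage(presence, target=0.8, n_boot=1000, seed=0):
    """Estimate the smallest number of pop-up nets whose pooled catch covers
    at least `target` of all observed species, by bootstrap resampling of nets.
    presence : (n_nets x n_species) boolean matrix of species occurrence."""
    rng = np.random.default_rng(seed)
    n_nets, _ = presence.shape
    total = presence.any(axis=0).sum()          # total species seen across all nets
    for n in range(1, n_nets + 1):
        richness = []
        for _ in range(n_boot):
            pick = rng.choice(n_nets, size=n, replace=False)
            richness.append(presence[pick].any(axis=0).sum())
        if np.mean(richness) >= target * total:
            return n
    return n_nets

# Toy presence/absence data (10 nets x 8 species); illustrative only.
rng = np.random.default_rng(1)
presence = rng.random((10, 8)) < 0.35
print("nets needed:", nets_for_coverage(presence))
```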
Procedia PDF Downloads 1891889 Systematic Study of Structure Property Relationship in Highly Crosslinked Elastomers
Authors: Natarajan Ramasamy, Gurulingamurthy Haralur, Ramesh Nivarthu, Nikhil Kumar Singha
Abstract:
Elastomers are polymeric materials with varied backbone architectures, ranging from linear to dendrimeric structures, and with a wide variety of monomeric repeat units. These elastomers are strongly viscous and only weakly elastic when they are not cross-linked; once cross-linked, depending on the extent of cross-linking, their properties can range from highly flexible to highly stiff. Lightly cross-linked systems are well studied and reported, but understanding the nature of highly cross-linked rubbers in terms of chemical structure and architecture is critical for a variety of applications. One of the critical parameters is the cross-link density. In the current work, we studied the highly cross-linked state of linear, lightly branched and star-shaped branched elastomers and determined the cross-link density using different models. The change in hardness, the shift in Tg, the change in modulus and the swelling behavior were measured experimentally as functions of the extent of curing, and these properties were analyzed using various models to determine the cross-link density. Hardness measurements were used to examine the cure time, and the relationship between hardness and the extent of curing was determined. It is well known that micromechanical transitions such as Tg and the storage modulus are related to the extent of cross-linking. The Tg of the elastomer in different cross-linked states was determined by DMA, and the cross-link density was estimated from the plateau modulus using Nielsen's model. For lightly cross-linked systems, the cross-link density is usually estimated from the equilibrium swelling ratio in a solvent using the Flory-Rehner model; for highly cross-linked systems, however, the Flory-Rehner model is not valid because of the short chain lengths. Therefore, models that treat the polymer as a non-Gaussian chain, namely 1) the Helmis-Heinrich-Straube (HHS) model, 2) the Gloria M. Gusler and Yoram Cohen model, and 3) the Barbara D. Barr-Howell and Nikolaos A. Peppas model, were used to estimate the cross-link density. In this work, correction factors for the existing models were determined, and based on them the structure-property relationship of highly cross-linked elastomers was studied.Keywords: dynamic mechanical analysis, glass transition temperature, parts per hundred grams of rubber, crosslink density, number of networks per unit volume of elastomer
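As the abstract notes, the Flory-Rehner model is the baseline for lightly cross-linked networks; the non-Gaussian models go beyond it. A minimal sketch of that baseline swelling calculation is given below, with illustrative values for the polymer volume fraction, the interaction parameter and the solvent molar volume (assumed, not from the study).

```python
import math

def flory_rehner_crosslink_density(v2, chi, V1):
    """Classical Flory-Rehner estimate of cross-link density (mol of elastically
    effective network chains per cm^3) from equilibrium swelling.
    v2  : polymer volume fraction in the swollen gel at equilibrium [-]
    chi : polymer-solvent interaction parameter [-]
    V1  : molar volume of the solvent [cm^3/mol]
    Valid for lightly cross-linked (Gaussian-chain) networks; the paper turns to
    non-Gaussian models (HHS, Gusler-Cohen, Barr-Howell-Peppas) beyond this range."""
    numerator = -(math.log(1.0 - v2) + v2 + chi * v2**2)
    denominator = V1 * (v2 ** (1.0 / 3.0) - v2 / 2.0)
    return numerator / denominator

# Illustrative inputs (assumed): swelling of a rubber in toluene
v2 = 0.25          # polymer volume fraction at equilibrium swelling
chi = 0.39         # polymer-solvent interaction parameter
V1 = 106.3         # molar volume of toluene [cm^3/mol]
print(f"crosslink density ~ {flory_rehner_crosslink_density(v2, chi, V1):.2e} mol/cm^3")
```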
Procedia PDF Downloads 1661888 Economic Impact of Drought on Agricultural Society: Evidence Based on a Village Study in Maharashtra, India
Authors: Harshan Tee Pee
Abstract:
Climate elements include surface temperature, rainfall patterns, humidity, the type and amount of cloudiness, air pressure, and wind speed and direction; a change in one element can have an impact on the regional climate. Scientific predictions indicate that global climate change will increase the number of extreme events, leading to more frequent natural hazards. Global warming is likely to intensify the risk of drought in certain regions while increasing rainfall in others. Drought is a slowly advancing disaster and a creeping phenomenon that accumulates over a long period of time. Droughts are naturally linked with aridity, but they occur over most parts of the world (both wet and humid regions) and create severe impacts on agriculture, basic household welfare, and ecosystems. A drought condition occurs at least every three years in India, which is among the most vulnerable drought-prone countries in the world. The economic impacts resulting from extreme environmental events and disasters are huge, as many economic activities are disrupted. The focus of this paper is to develop a comprehensive understanding of the distributional impacts of disaster, especially the impact of drought on agricultural production and income, through a panel study (the drought year and one year after the drought) in Raikhel village, Maharashtra, India. The major findings of the study indicate that both the cultivated area and the number of cultivating households decreased after the drought, indicating a shift in livelihood from agriculture to non-agriculture. The decline in the gross cropped area and in the production of various crops depended on the negative income from these crops in the previous agricultural season. All landholding categories of households except landlords had negative income in the drought year, and income disparities between households were also higher in that year. In the drought year, the cost of cultivation was higher for all landholding categories due to increased irrigation and input costs, and agricultural products (50 per cent of the total) were used for household consumption rather than sold in the market. It is evident from the study that livelihoods based on natural resources became less attractive to the people due to the risk involved, and people moved to lower-risk livelihoods for their sustenance.Keywords: climate change, drought, agriculture economics, disaster impact
Procedia PDF Downloads 1181887 In vitro Study of Inflammatory Gene Expression Suppression of Strawberry and Blackberry Extracts
Authors: Franco Van De Velde, Debora Esposito, Maria E. Pirovani, Mary A. Lila
Abstract:
The physiology of various inflammatory diseases is a complex process mediated by inflammatory and immune cells such as macrophages and monocytes. Chronic inflammation, as observed in many cardiovascular and autoimmune disorders, occurs when the low-grade inflammatory response fails to resolve with time. Because of the complexity of chronic inflammatory disease, major efforts have focused on identifying novel anti-inflammatory agents and dietary regimes that prevent the pro-inflammatory process at the early stage of gene expression of key pro-inflammatory mediators and cytokines. The ability of extracts of three blackberry cultivars (‘Jumbo’, ‘Black Satin’ and ‘Dirksen’) and one strawberry cultivar (‘Camarosa’) to inhibit four well-known genetic biomarkers of inflammation, inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (Cox-2), interleukin-1β (IL-1β) and interleukin-6 (IL-6), in an in vitro lipopolysaccharide-stimulated murine RAW 264.7 macrophage model was investigated. Moreover, the effect of these extracts on intracellular reactive oxygen species (ROS) and nitric oxide (NO) production was assessed. The assay was conducted at a crude extract concentration of 50 µg/mL, an amount that is easily achievable in the gastrointestinal tract after berry consumption. The mRNA expression levels of Cox-2 and IL-6 were reduced consistently (by more than 30%) by extracts of ‘Jumbo’ and ‘Black Satin’ blackberries. Strawberry extracts showed a large reduction in the mRNA expression level of IL-6 (more than 65%) and a moderate reduction in the mRNA expression of Cox-2 (more than 35%). This behavior mirrors the intracellular ROS production of the LPS-stimulated RAW 264.7 macrophages after treatment with the blackberry ‘Black Satin’ and ‘Jumbo’ and strawberry ‘Camarosa’ extracts, suggesting that phytochemicals from these fruits may play a role in health maintenance by reducing oxidative stress. On the other hand, no effective inhibition of the gene expression of IL-1β and iNOS was observed with any of the blackberry or strawberry extracts. However, suppression of NO production in the activated macrophages by 5–25% was observed with the ‘Jumbo’ and ‘Black Satin’ blackberry extracts and the ‘Camarosa’ strawberry extracts, suggesting NO-suppressing properties of the phytochemicals of these fruits. All these results suggest the potential beneficial effects of the studied berries as functional foods with antioxidant and anti-inflammatory roles. Moreover, the underlying role of phytochemicals from these fruits in protecting against the inflammatory process deserves to be further explored.Keywords: cyclooxygenase-2, functional foods, interleukin-6, reactive oxygen species
Procedia PDF Downloads 2401886 Basics for Corruption Reduction and Fraud Prevention in Industrial/Humanitarian Organizations through Supplier Management in Supply Chain Systems
Authors: Ibrahim Burki
Abstract:
Unfortunately, all organizations (industrial as well as humanitarian/non-governmental organizations) are prone to fraud and corruption in their supply chain management routines, and the reputational and financial fallout can be disastrous. The growing number of companies using suppliers based in the local market has certainly increased the threat of fraud as well as corruption. The potential threats are various: poor or non-existent record keeping, purchasing of lower-quality goods at higher prices, excessive entertainment of staff by suppliers, deviations in communications between procurement staff and suppliers such as calls or text messages to mobile phones, staff demanding extended periods of notice before they allow an audit to take place, inexperienced buyers, and more. Despite all of these threats, this research paper emphasizes the effectiveness of well-maintained vendor records and the sorting/filtration of vendors in cutting down the possible threats of corruption and fraud. The exercise was applied in a humanitarian organization in Pakistan, but it is applicable to the whole South Asia region due to the similarity of culture and contexts. In that firm, there were more than 550 (five hundred and fifty) registered vendors. During disasters or emergency phases, requirements are met on an urgent basis, providing golden opportunities for fake companies, or for sister companies of already registered companies, to be involved in the tendering process without declaration or even under a different (new) company name. Therefore, a list of required documents (along with a checklist) was developed and sent to all vendors in the current database, and based upon the receipt of the requested documents the vendors were sorted. These vendors were then divided into an active group (meeting the entire set of criteria) and a non-active group. This initial filtration stage allowed the firm to continue its work without a complete shutdown: only vendors falling in the active group are allowed to participate in tenders until the whole process is completed. Likewise, only companies or firms meeting the set criteria (the active category) will be allowed to register in the future, along with a dedicated filing system (both soft and hard copies shall be maintained), and all of the companies/firms in the active group shall be physically verified (visited) by a committee comprising senior members of at least the Finance, Supply Chain (other than procurement) and Security departments.Keywords: corruption reduction, fraud prevention, supplier management, industrial/humanitarian organizations
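A minimal sketch of the checklist-based filtration step described above is given below. The document names and vendor records are invented for illustration; the abstract does not list the actual checklist used by the organization.

```python
from dataclasses import dataclass, field

# Hypothetical checklist of required documents (assumption for the sketch).
REQUIRED_DOCUMENTS = {"registration_certificate", "tax_certificate",
                      "bank_statement", "ownership_declaration"}

@dataclass
class Vendor:
    name: str
    submitted_documents: set = field(default_factory=set)

def filter_vendors(vendors):
    """Split vendors into an 'active' group (all required documents received)
    and a 'non-active' group, mirroring the initial filtration stage."""
    active, non_active = [], []
    for v in vendors:
        if REQUIRED_DOCUMENTS.issubset(v.submitted_documents):
            active.append(v)
        else:
            non_active.append(v)
    return active, non_active

# Illustrative records only
vendors = [
    Vendor("Vendor A", {"registration_certificate", "tax_certificate",
                        "bank_statement", "ownership_declaration"}),
    Vendor("Vendor B", {"registration_certificate"}),
]
active, non_active = filter_vendors(vendors)
print("active:", [v.name for v in active],
      "| non-active:", [v.name for v in non_active])
```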
Procedia PDF Downloads 5411885 The Ideal Memory Substitute for Computer Memory Hierarchy
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Computer system components such as the CPU, the controllers, and the operating system work together as a team, and storage or memory is an essential part of this team apart from the processor. The memory and storage system, including processor caches, main memory, and storage, forms the basic storage component of a computer system. The characteristics of the different types of storage are inherent in the design and the technology employed in their manufacture; these characteristics define the speed, compatibility, cost, volatility, and density of the various storage types. Most computers rely on a hierarchy of storage devices for performance, and the effective and efficient use of the memory hierarchy is therefore the single most important aspect of computer system design and use. The memory hierarchy is becoming a fundamental performance and energy bottleneck, due to the widening gap between the increasing demands of modern computer applications and the limited performance and energy efficiency provided by traditional memory technologies. With the dramatic development of computer systems, computer storage has had a difficult time keeping up with processor speed. Computer architects therefore face constant challenges in developing high-speed, high-performance computer storage that is energy-efficient, cost-effective and reliable enough to intercept processor requests. It is very clear that substantial advances in redesigning the existing physical and logical memory structures to keep up with the latest processor potential are crucial. This research work investigates the importance of the computer memory (storage) hierarchy in the design of computer systems. The constituent storage types of today's hierarchy were investigated, looking at the design technologies and how those technologies affect the memory characteristics: speed, density, stability and cost. The investigation considered how these characteristics could best be harnessed for the overall efficiency of the computer system. The research revealed that the best single type of storage, which we refer to as the ideal memory, is a single logical physical memory that would combine the best attributes of each memory type making up the hierarchy: an access speed as high as that of CPU registers, combined with the highest storage capacity, excellent stability in the presence or absence of power as found in magnetic and optical disks (as opposed to volatile DRAM), and yet a cost-effectiveness far removed from expensive SRAM. The research suggests that overcoming these barriers may mean that memory manufacturing will have to deviate totally from the present technologies and adopt one that overcomes the challenges associated with traditional memory technologies.Keywords: cache, memory-hierarchy, memory, registers, storage
Procedia PDF Downloads 1671884 Development of a Test Plant for Parabolic Trough Solar Collectors Characterization
Authors: Nelson Ponce Jr., Jonas R. Gazoli, Alessandro Sete, Roberto M. G. Velásquez, Valério L. Borges, Moacir A. S. de Andrade
Abstract:
The search for increased efficiency in generation systems has become very important in recent years in order to reduce greenhouse gas emissions and global warming. For clean energy sources, such as generation systems using concentrated solar power technology, this efficiency improvement translates into a lower investment per kW, improving the viability of a project. In the specific case of parabolic trough solar concentrators, performance is strongly linked to the geometric precision of assembly and to the individual efficiencies of the main components, such as the parabolic mirrors and receiver tubes. For an accurate efficiency analysis, the tests should therefore be conducted empirically, under mounting and operating conditions like those observed in the field. The Brazilian power generation and distribution company Eletrobras Furnas, through the R&D program of the National Agency of Electrical Energy, has developed a plant for testing parabolic trough concentrators located in Aparecida de Goiânia, in the state of Goiás, Brazil. The main objective of this test plant is the characterization of the prototype concentrator being developed by the company in partnership with Eudora Energia, seeking to optimize it to obtain the same or better efficiency than the concentrators of this type already known commercially. The test plant is a closed pipe system in which a pump circulates a heat transfer fluid, also called HTF, through the concentrator being characterized. A flow meter and two temperature transmitters, installed at the inlet and outlet of the concentrator, record the parameters necessary to determine the power absorbed by the system and then to calculate its efficiency based on the direct solar irradiation available during the test period. After the HTF gains heat in the concentrator, it flows through heat exchangers that dissipate the acquired energy to the ambient; the goal is to keep the concentrator inlet temperature constant throughout the desired test period. The plant performs the tests autonomously: the operator only enters the HTF flow rate, the desired concentrator inlet temperature, and the test time into the control system. This paper presents the methodology employed for the design and operation of the plant, as well as the instrumentation needed, serving as a guideline for standardizing such test facilities.Keywords: parabolic trough, concentrated solar power, CSP, solar power, test plant, energy efficiency, performance characterization, renewable energy
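The efficiency calculation described above follows directly from the measured flow and inlet/outlet temperatures compared with the direct irradiance on the aperture. The sketch below shows that calculation; the fluid properties, collector geometry and operating values are illustrative assumptions, not measurements from the plant.

```python
def collector_efficiency(m_dot, cp, t_in, t_out, dni, aperture_area):
    """Instantaneous thermal efficiency of a trough collector:
    useful power gained by the HTF divided by the direct normal irradiance
    intercepted by the collector aperture.
    m_dot         : HTF mass flow rate [kg/s]
    cp            : HTF specific heat [J/(kg K)]
    t_in, t_out   : concentrator inlet/outlet temperatures [degC]
    dni           : direct normal irradiance [W/m^2]
    aperture_area : collector aperture area [m^2]"""
    q_useful = m_dot * cp * (t_out - t_in)   # power absorbed by the HTF [W]
    q_solar = dni * aperture_area            # solar power on the aperture [W]
    return q_useful / q_solar

# Illustrative numbers only (not measurements from the Aparecida de Goiânia plant)
eta = collector_efficiency(m_dot=1.2, cp=2300.0, t_in=150.0, t_out=162.0,
                           dni=850.0, aperture_area=60.0)
print(f"thermal efficiency ~ {eta:.1%}")
```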
Procedia PDF Downloads 1201883 Cosmetic Recommendation Approach Using Machine Learning
Authors: Shakila N. Senarath, Dinesh Asanka, Janaka Wijayanayake
Abstract:
The demand for cosmetic products is rising to fulfill consumer needs for personal appearance and hygiene. A cosmetic product consists of various chemical ingredients which may help to keep the skin healthy or may lead to damage, and not every chemical ingredient performs in the same way on every person. The most appropriate way to select a healthy cosmetic product is to identify the texture of the body first and then select the most suitable product with safe ingredients; the selection process of cosmetic products is therefore complicated. Consumer surveys have shown that most of the time the selection of cosmetic products is done improperly. In this study, a content-based system is suggested that recommends cosmetic products according to human factors, where skin type, gender and price range are considered as the human factors. The proposed system is implemented using machine learning, with the consumer's skin type, gender and price range taken as inputs. The skin type of the consumer is derived using the Baumann Skin Type Questionnaire, a value-based approach that uses a number of questions to assign the user to one of the 16 skin types of the Baumann Skin Type Indicator (BSTI). Two datasets were collected for the research: a user dataset, gathered through a questionnaire given to the public, and a cosmetic dataset. Product details are included in the cosmetic dataset, which covers 5 different product categories (Moisturizer, Cleanser, Sun protector, Face Mask, Eye Cream). TF-IDF (Term Frequency - Inverse Document Frequency) is applied to vectorize the cosmetic ingredients in the generic cosmetic products dataset and in the user-preferred dataset, so that both the user-preferred products and the generic cosmetic products can be represented as sparse vectors. The similarity between each user-preferred product and each generic cosmetic product is then calculated using the cosine similarity method, and a similarity matrix is used for the recommendation process: the higher the similarity, the better the match for the consumer. Sorting a user's column of the similarity matrix in descending order of similarity retrieves the list of recommended products. Since user information such as gender and the price range for purchasing has also been gathered, further optimization can be done by weighting these parameters once a set of recommended products for a user has been retrieved.Keywords: content-based filtering, cosmetics, machine learning, recommendation system
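A short sketch of the TF-IDF plus cosine similarity step is shown below using scikit-learn. The ingredient lists and product names are invented examples; the real datasets cover the five product categories and are not reproduced in the abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented ingredient lists for illustration only.
catalog = {
    "Moisturizer X": "water glycerin dimethicone niacinamide",
    "Cleanser Y": "water cocamidopropyl betaine glycerin citric acid",
    "Sun protector Z": "water zinc oxide glycerin tocopherol",
}
user_preferred = {"Liked moisturizer": "water glycerin niacinamide panthenol"}

# Vectorize the ingredient lists of both datasets in a shared TF-IDF space
vectorizer = TfidfVectorizer()
catalog_vecs = vectorizer.fit_transform(catalog.values())
user_vecs = vectorizer.transform(user_preferred.values())

# Similarity matrix: rows = user-preferred products, columns = catalog products
sim = cosine_similarity(user_vecs, catalog_vecs)

# Rank catalog products for the first user-preferred product, best match first
ranking = sorted(zip(catalog.keys(), sim[0]), key=lambda p: p[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.2f}")
```

Gender and price-range filters would then be applied as weights on this ranked list, as the abstract suggests.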
Procedia PDF Downloads 1351882 Signaling Theory: An Investigation on the Informativeness of Dividends and Earnings Announcements
Authors: Faustina Masocha, Vusani Moyo
Abstract:
For decades, dividend announcements have been presumed to contain important signals about the future prospects of companies, and the same has been presumed about management earnings announcements. Although both dividend and earnings announcements are considered informative, a number of researchers have questioned their credibility and found both to contain only short-term signals. Regarding dividend announcements, some authors argued that although they may contain important information that changes share prices, and consequently generates abnormal returns, their degree of informativeness is lower than that of other signaling tools such as earnings announcements. This claim has in turn been refuted by researchers who found the effect of earnings to be transitory and of little value to shareholders, as indicated by the small abnormal returns earned during the period surrounding earnings announcements. Given that both dividends and earnings have been hypothesized to have a signaling impact, the question arises as to which of these two signaling tools is more informative. To answer it, two follow-up questions were asked: which event has the greater effect on share prices, and which event influences trading volume the most. To answer the first question and evaluate the effect that each of these events had on share prices, an event study methodology was employed on a sample of the top 10 JSE-listed companies, with data collected from 2012 to 2019, to determine whether shareholders gained abnormal returns (ARs) around announcement dates; the event that resulted in the most persistent and largest ARs was considered more informative. For the second question, it was investigated whether dividend or earnings announcements influenced trading patterns, resulting in abnormal trading volume (ATV) around announcement time; the event that resulted in the greater ATV was considered more informative. Using an estimation period of 20 days, an event window of 21 days, and hypothesis testing, it was found that announcements of earnings increases resulted in the largest ARs and Cumulative Abnormal Returns (CARs) and had a lasting effect, in comparison to dividend announcements, whose effect lasted only until day +3. This supports the empirical argument that the signaling effect of dividends is diminishing. It was also found that when reported earnings declined in comparison to the previous period, trading volume increased, resulting in ATV. Although dividend announcements did result in abnormal returns, these were smaller than those earned around earnings announcements, which refutes a number of theoretical and empirical arguments that found dividends to be more informative than earnings announcements.Keywords: dividend signaling, event study methodology, information content of earnings, signaling theory
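A minimal event-study sketch is given below: expected returns are estimated with a market model over the estimation window, abnormal returns are computed over the event window, and their sum gives the CAR. The market-model choice and the simulated return series are assumptions for illustration; only the 20-day estimation period and 21-day event window follow the abstract.

```python
import numpy as np

def car_market_model(stock_ret, market_ret, est_len=20, event_len=21):
    """Market-model event study: fit alpha/beta on the estimation window,
    compute abnormal returns AR_t = R_t - (alpha + beta * R_m,t) on the
    event window, and sum them into the cumulative abnormal return (CAR)."""
    est_s, est_m = stock_ret[:est_len], market_ret[:est_len]
    beta, alpha = np.polyfit(est_m, est_s, 1)           # OLS fit on estimation window
    ev_s = stock_ret[est_len:est_len + event_len]
    ev_m = market_ret[est_len:est_len + event_len]
    ar = ev_s - (alpha + beta * ev_m)                    # abnormal returns
    return ar, ar.sum()

# Simulated daily returns standing in for one listed stock and the market index
rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 41)
stock = 0.0002 + 1.1 * market + rng.normal(0, 0.008, 41)
stock[25] += 0.03                                        # injected announcement-day jump
ar, car = car_market_model(stock, market)
print(f"CAR over the 21-day event window: {car:.3%}")
```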
Procedia PDF Downloads 1771881 Management in the Transport of Pigs to Slaughterhouses in the Valle De Aburrá, Antioquia
Authors: Natalia Uribe Corrales, María Fernanda Benavides Erazo, Santiago Henao Villegas
Abstract:
Introduction: Transport is a crucial link in the pork chain because it is considered a stressful event for the animal: it involves a new environment and new interactions, together with factors such as speed, noise, temperature changes, vibrations, and deprivation of food and water. Inadequate handling at this stage can therefore lead to bruises, musculoskeletal injuries, fatigue, and mortality, resulting in carcass seizures and economic losses. Objective: To characterize the transport and driving practices used for moving live pigs to slaughter plants in the Valle de Aburrá, Antioquia, Colombia, in 2017. Methods: A descriptive cross-sectional study was carried out with the transporters arriving at the slaughterhouses approved by the National Institute for Food and Medicine Surveillance (INVIMA) in the Valle de Aburrá during 2017. Samples were obtained by probabilistic sampling. Variables such as journey time, mechanical technical certificate, training in animal welfare, driving speed, material and condition of floors and separators, supervision of the animals during the trip, load density, and mortality were analyzed. The study was approved by the CICUA ethics committee for the use and care of animals of CES University, Act number 14 of 2015. Results: 190 trucks were analyzed. 12.4% did not have an updated mechanical technical certificate, and the transporters' experience in pig transport averaged 9.4 years (s.d. 7.5). 85.8% reported not having received training in animal welfare. The average driving speed was 63.04 km/h (s.d. 13.46); 62% of the trucks had floors in good condition, but 48% had separators in bad condition. 88% of the transporters did not supervise their animals during the journey, although 62.2% used an adequate loading density; average mortality was 0.2 deaths per journey (s.d. 0.5). Conclusions: Transporters should be trained on issues such as proper vehicle maintenance, animal welfare, the obligatory inspection of animals during mobilization, and driving speed, since when these indicators are poorly managed they generate stress in the animals, increasing injuries as well as possible accidents. It is also necessary to continue improving aspects such as aluminum floors and separators that favor easy cleaning and maintenance, as well as appropriate loading densities that promote animal welfare.Keywords: animal welfare, driving practices, pigs, truck infrastructure
Procedia PDF Downloads 2081880 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended here to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from an impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case studied here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments were conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization were independently chosen to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter was investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different widths of the signal windows on the reconstructed force was examined. It was observed that the impact force generated by the instrumented hammer is sensitive to the impact location on the structure, with shapes ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force was evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
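A minimal sketch of deconvolution with Tikhonov regularization for force reconstruction is given below: the convolution (Toeplitz) matrix is built from an impulse response and the regularized least-squares problem is solved in closed form. The impulse response, half-sine force shape, noise level and regularization parameter are illustrative assumptions, not the panel data or settings from the study.

```python
import numpy as np
from scipy.linalg import toeplitz

def tikhonov_deconvolution(y, h, lam):
    """Reconstruct a force history f from a sensor response y = H f + noise,
    where H is the convolution (Toeplitz) matrix built from the impulse
    response h, using Tikhonov regularization:
        f_hat = argmin ||H f - y||^2 + lam^2 ||f||^2
              = (H^T H + lam^2 I)^-1 H^T y"""
    n = len(y)
    H = toeplitz(h, np.zeros(n))                  # lower-triangular convolution matrix
    A = H.T @ H + lam**2 * np.eye(n)
    return np.linalg.solve(A, H.T @ y)

# Illustrative synthetic example (not the honeycomb panel data)
n, dt = 200, 1e-4
t = np.arange(n) * dt
h = np.exp(-t / 2e-3) * np.sin(2 * np.pi * 800 * t)          # assumed impulse response
f_true = np.where(t < 2e-3, np.sin(np.pi * t / 2e-3), 0.0)   # half-sine impact force
rng = np.random.default_rng(0)
H = toeplitz(h, np.zeros(n))
y = H @ f_true + rng.normal(0, 0.01, n)                      # noisy sensor signal
f_hat = tikhonov_deconvolution(y, h, lam=0.05)
corr = np.corrcoef(f_true, f_hat)[0, 1]                      # accuracy criterion from the abstract
print(f"correlation with the true force: {corr:.3f}")
```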
Procedia PDF Downloads 536