Search results for: time series models
1727 Examining Terrorism through a Constructivist Framework: Case Study of the Islamic State
Authors: Shivani Yadav
Abstract:
The study of terrorism lends itself to the constructivist framework, as constructivism focuses on the importance of ideas and norms in shaping interests and identities. Constructivism is pertinent to understanding the phenomenon of a terrorist organization like the Islamic State (IS), which opportunistically utilizes radical ideas and norms to shape its ‘politics of identity’. This ‘identity’, which is at the helm of the preferences and interests of actors, in turn shapes actions. The paper argues that an effective counter-terrorism policy must recognize the importance of ideas in order to counter the threat arising from acts of radicalism and terrorism. Traditional theories of international relations, with their emphasis on a state-centric security problematic, exhibit several limitations in interpreting the phenomenon of terrorism. With the changing global order, these theories have failed to adapt to the changing dimensions of terrorism, especially ‘newer’ actors like IS. The paper observes that IS distinguishes itself from other terrorist organizations in the way it recruits and spreads its propaganda. Not only are its methods different, but its tools (like social media) are also new. Traditionally, force alone has rarely been sufficient to counter terrorism, and it seems especially unlikely to completely root out an organization like IS. The time is ripe to change the discourse around terrorism and counter-terrorism strategies. The counter-terrorism measures adopted by states, which primarily focus on mitigating threats to national security, are preoccupied with statist objectives: the continuance of state institutions and the maintenance of order. This limitation prevents these theories from addressing questions of justice and the ‘human’ aspects of ideas and identity. These counter-terrorism strategies adopt a problem-solving approach that attempts to treat the symptoms without diagnosing the disease.
Hence, these restrictive strategies fail to look beyond calculated retaliation against violent actions in order to address the underlying causes of discontent pertaining to ‘why’ actors turn violent in the first place. What traditional theories also overlook is that overt acts of violence may have several causal factors behind them, some of which are rooted in the structural state system. Exploring these root causes through the constructivist framework helps to decipher the process of the ‘construction of terror’ and to move beyond the ‘what’ in theorization in order to describe ‘why’, ‘how’ and ‘when’ terrorism occurs. The study of terrorism would benefit greatly from a constructivist analysis in order to explore non-military options for countering the ideology propagated by IS.
Keywords: constructivism, counter-terrorism, Islamic State, politics of identity
Procedia PDF Downloads 189
1726 Development of a Stable RNAi-Based Biological Control for Sheep Blowfly Using Bentonite Polymer Technology
Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody
Abstract:
Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M annually). Currently, control of Lucilia cuprina relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is therefore critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls. This research project aims to develop an RNAi-based biological control for sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi and insects. However, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for the controlled release of dsRNA in the challenging pH environment of the blowfly larval gut, prolonging its exposure time to, and uptake by, target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities, and three of them were found to afford dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on the stability results, dsRNA from potential target genes was loaded onto BenPol carriers and tested in larval feeding assays, with three genes showing knockdown. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, aiming to provide an effective insect cell model for preliminary assessment and screening of RNAi efficacy. The results of this study establish that dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which rapidly degraded in sheep serum.
The stable nanoparticle delivery system offered by BenPol technology can protect and increase the inherent stability of dsRNA molecules at higher temperatures in a complex biological fluid like serum, showing promise for future use in enhancing animal protection.
Keywords: flystrike, RNA interference, bentonite polymer technology, Lucilia cuprina
Procedia PDF Downloads 92
1725 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter
Authors: Zhu Xinxin, Wang Hui, Yang Kai
Abstract:
Heat flux is one of the most important test parameters in ground thermal protection tests. The slug calorimeter is selected as the main sensor for measuring heat flux in arc wind tunnel tests due to its convenience and low cost. However, because of excessive lateral heat transfer and shortcomings of the calculation method, the heat flux measurement error of the slug calorimeter is large. In order to enhance measurement accuracy, the heat insulation structure and the heat flux calculation method of the slug calorimeter were improved. The heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on this model, an insulating sleeve with a hollow structure was designed, which greatly decreased lateral heat transfer, and the slug with the hollow insulating sleeve was encapsulated in a package shell. The improved insulation structure reduced heat loss and ensured that the heat transfer characteristics were almost the same during calibration and testing. A heat flux calibration test was carried out in an arc lamp system for heat flux sensor calibration, and the results show that the test accuracy and precision of the slug calorimeter are greatly improved. In the meantime, a simulation model of the slug calorimeter was built, and heat flux values for different temperature rise periods were calculated with it. The results show that extracting the temperature rise rate as early as possible yields a smaller heat flux calculation error. The effect of thermal contact resistance on the calculation error was then analyzed with the simulation model, and the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. A direct comparison calibration correction method was proposed based on heat flux calibration alone.
A numerical calculation correction method was also proposed, based on the heat flux calibration and the simulation model of the slug calorimeter after solving for the contact resistance between the slug and the insulating sleeve. The simulation and test results show that both methods can greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. The test results show that the repeatability of the improved slug calorimeter is within 3%, the deviation between different slug calorimeters is less than 3% in the same flow field, and the deviation between the slug calorimeter and a Gardon gauge is less than 4% in the same flow field.
Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter
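The energy-conservation relation behind a slug calorimeter reduces, in its simplest lumped form, to q = ρ·c·L·(dT/dt), where ρ, c and L are the slug's density, specific heat and length. The sketch below recovers the heat flux from the slope of an early-time temperature record; the copper material properties, the 5 K/s rise rate and the synthetic data are assumed values for illustration, not figures from the paper.

```python
import numpy as np

# Assumed material properties for a copper slug (illustrative only)
rho = 8960.0   # density, kg/m^3
c = 385.0      # specific heat, J/(kg*K)
L = 0.01       # slug length, m

# Synthetic early-time temperature record: a linear rise of 5 K/s
t = np.linspace(0.0, 2.0, 21)   # time, s
T = 300.0 + 5.0 * t             # temperature, K

# Heat flux from lumped energy conservation: q = rho * c * L * dT/dt,
# with dT/dt taken as the slope of a linear fit to the record
slope = np.polyfit(t, T, 1)[0]
q = rho * c * L * slope         # W/m^2
```

Fitting only the earliest portion of the record mirrors the paper's observation that extracting the temperature rise rate as soon as possible reduces the calculation error, since lateral heat losses grow with time.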
Procedia PDF Downloads 118
1724 A Comparative Study to Evaluate Changes in Intraocular Pressure with Thiopentone Sodium and Etomidate in Patients Undergoing Surgery for Traumatic Brain Injury
Authors: Vasudha Govil, Prashant Kumar, Ishwar Singh, Kiranpreet Kaur
Abstract:
Traumatic brain injury leads to elevated intracranial pressure. Intraocular pressure (IOP) may also be affected by intracranial pressure: increased venous pressure in the cavernous sinus is transmitted to the episcleral veins, resulting in an increase in IOP. All drugs used in anesthesia induction can change IOP. Irritation of the gag reflex by the endotracheal tube can also increase IOP; therefore, administering the anesthetic drugs that produce the smallest change in IOP is important, while cardiovascular depression must also be avoided. Thiopentone decreases IOP by 40%, whereas etomidate decreases IOP by 30-60% for up to 5 minutes. One hundred patients (age 18-55 years) who underwent emergency craniotomy for TBI were selected for the study. Patients were randomly assigned to two groups of 50 patients each according to the drug used for induction: group T was given thiopentone sodium (5 mg kg-1) and group E was given etomidate (0.3 mg kg-1). Pre-anesthesia intraocular pressure (IOP) was measured using a Schiotz tonometer. Induction of anesthesia was achieved with etomidate (0.3 mg kg-1) or thiopentone (5 mg kg-1) along with fentanyl (2 mcg kg-1). Intravenous rocuronium (0.9 mg kg-1) was given to facilitate intubation. Intraocular pressure was measured 1 minute after administration of the induction agent and 5 minutes after intubation. Maintenance of anesthesia was achieved with isoflurane in 50% nitrous oxide at a fresh gas flow of 5 litres. At the end of the surgery, the residual neuromuscular block was reversed and the patient was shifted to the ward/ICU. Patients in both groups were comparable in terms of demographic profile. There was no significant difference between the groups in the hemodynamic and respiratory variables prior to thiopentone or etomidate administration.
Intraocular pressure in the thiopentone group before induction was 14.97±3.94 mmHg in the left eye and 14.72±3.75 mmHg in the right eye; in the etomidate group it was 15.28±3.69 mmHg and 15.54±4.46 mmHg, respectively. After induction, IOP decreased significantly in both eyes (p<0.001) in both groups. Five minutes after intubation, IOP was significantly lower than baseline in both eyes, though higher than the IOP immediately after induction. There was no statistically significant difference in IOP between the two groups at any time point. Both drugs caused a significant decrease in IOP after induction and 5 minutes after endotracheal intubation. The mechanism of the decrease in IOP by intravenous induction agents is debatable: systemic hypotension after the induction of anaesthesia has been shown to cause a decrease in intra-ocular pressure, and a decrease in the tone of the extra-ocular muscles can also result in a decrease in intra-ocular pressure. We observed that it is appropriate to use etomidate as an induction agent when elevation of intra-ocular pressure is undesirable, owing to the cardiovascular stability it confers.
Keywords: etomidate, intraocular pressure, thiopentone, traumatic
Procedia PDF Downloads 126
1723 Long Term Survival after a First Transient Ischemic Attack in England: A Case-Control Study
Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski
Abstract:
Transient ischaemic attacks (TIAs) are warning signs of future strokes. TIA patients are at increased risk of stroke and cardiovascular events after a first episode. The majority of studies on TIA have focused on the occurrence of these ancillary events after a TIA; long-term mortality after TIA has received only limited attention. We undertook this study to determine the long-term hazards of all-cause mortality following a first episode of TIA, using anonymised electronic health records (EHRs). We conducted a retrospective case-control study using electronic primary health care records from The Health Improvement Network (THIN) database. Patients born in or prior to 1960, resident in England, with a first diagnosis of TIA between January 1986 and January 2017 were matched to three controls each on age, sex and general medical practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a time-varying Weibull-Cox survival model that included both scale and shape effects and a random frailty effect of GP practice. 20,633 cases and 58,634 controls were included. Cases aged 39 to 60 years at the first TIA event had the highest hazard ratio (HR) of mortality compared to matched controls (HR = 3.04, 95% CI (2.91-3.18)). The HRs for cases aged 61-70 years, 71-76 years and 77+ years were 1.98 (1.55-2.30), 1.79 (1.20-2.07) and 1.52 (1.15-1.97), respectively, compared to matched controls. Aspirin provided long-term survival benefits to cases: cases aged 39-60 years on aspirin had HRs of 0.93 (0.84-1.00), 0.90 (0.82-0.98) and 0.88 (0.80-0.96) at 5, 10 and 15 years, respectively, compared to cases in the same age group who were not on antiplatelets. Similar beneficial effects of aspirin were observed in the other age groups. There were no significant survival benefits with other antiplatelet options, and no survival benefits of antiplatelet drugs were observed in controls.
Our study highlights the excess long-term risk of death in TIA patients and cautions that TIA should not be treated as a benign condition. The study further suggests aspirin as the better option for secondary prevention in TIA patients compared to the clopidogrel recommended by NICE guidelines. Management of risk factors and treatment strategies remain important challenges in reducing the burden of disease.
Keywords: dual antiplatelet therapy (DAPT), general practice, multiple imputation, The Health Improvement Network (THIN), hazard ratio (HR), Weibull-Cox model
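In a plain (time-constant) Weibull-Cox model, the hazard for a subject with covariate x is h(t|x) = (k/λ)(t/λ)^(k−1)·exp(βx), so the case-versus-control hazard ratio is exp(β), constant over time. The sketch below illustrates that structure with arbitrary shape and scale values; the study's actual model is richer, with time-varying scale and shape effects and a GP-practice frailty term, so its hazard ratios need not be constant in time.

```python
import math

def weibull_hazard(t, shape, scale):
    # Baseline Weibull hazard: h(t) = (k/lambda) * (t/lambda)**(k-1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def cox_weibull_hazard(t, shape, scale, beta, x):
    # Proportional-hazards scaling of the Weibull baseline
    return weibull_hazard(t, shape, scale) * math.exp(beta * x)

# Choose beta so the case/control HR matches the reported 3.04 for ages 39-60
# (shape=1.3 and scale=20.0 are arbitrary illustrative values)
beta = math.log(3.04)
hrs = []
for t in (1.0, 5.0, 15.0):
    case = cox_weibull_hazard(t, 1.3, 20.0, beta, x=1)
    control = cox_weibull_hazard(t, 1.3, 20.0, beta, x=0)
    hrs.append(case / control)
```

Under proportional hazards the ratio is exp(β) at every t; the aspirin HRs reported at 5, 10 and 15 years differ from each other precisely because the fitted model relaxes that assumption.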
Procedia PDF Downloads 149
1722 Community Based Psychosocial Intervention Reduces Maternal Depression and Improves Infant Development in Bangladesh
Authors: S. Yesmin, N. F. Rahman, R. Akther, T. Begum, T. Tahmid, T. Chowdury, S. Afrin, J. D. Hamadani
Abstract:
Maternal depression is one of the risk factors for developmental delay in young children in low-income countries, yet maternal depression during pregnancy is rarely reported in Bangladesh. Objectives: The purpose of the present study was to examine the efficacy of a community-based psychosocial intervention for women with mild to moderate depressive illness during the perinatal period, assessing the mothers' mental status and the growth and development of their children from birth to 12 months. Methodology: The study followed a prospective longitudinal approach with a randomized controlled design. A total of 250 pregnant women aged between 15 and 40 years were enrolled in their third trimester of pregnancy, of whom 125 were in the intervention group and 125 in the control group. Women in the intervention group received the "Thinking Healthy (CBT-based) Program" in their home setting, from the last month of pregnancy until 10 months after delivery; their children received psychosocial stimulation from birth to 12 months. The following instruments were used to collect outcome information: the Bangla version of the Edinburgh Postnatal Depression Scale (BEPDS), the Prenatal Attachment Inventory (PAI), the Maternal Attachment Inventory (MAI), the Bayley Scales of Infant Development, Third Edition (Bayley-III) and the Family Care Indicator (FCI). In addition, severe morbidity, breastfeeding, immunization, and socio-economic and demographic information were collected. Data were collected at three time points: baseline, midline (6 months after delivery) and endline (12 months after delivery). Results: There was no significant difference between any of the socioeconomic and demographic variables at baseline.
A preliminary analysis of the data shows an intervention effect on the socioemotional behaviour of children at endline (p<0.001), motor development at midline (p=0.016) and endline (p=0.065), language development at midline (p=0.004) and endline (p=0.023), cognitive development at midline (p=0.008) and endline (p=0.002), and quality of psychosocial stimulation at midline (p=0.023) and endline (p=0.010). EPDS at baseline did not differ between the groups (p=0.419), but there was a significant improvement at midline (p=0.027) and endline (p=0.024) following the intervention. Conclusion: The psychosocial intervention was found effective in reducing women's mild and moderate depressive illness, helping them cope with mental health problems, and improving the development of young children in Bangladesh.
Keywords: mental health, maternal depression, infant development, CBT, EPDS
Procedia PDF Downloads 272
1721 Evaluating Value of Users' Personal Information Based on Cost-Benefit Analysis
Authors: Jae Hyun Park, Sangmi Chai, Minkyun Kim
Abstract:
As users spend more time on the Internet, the probability of their personal information being exposed has grown. The main purpose of this research is to investigate the factors, and examine the relationships among them, through which Internet users come to recognize the value of their private information as an economic asset. The study targets Internet users, and the value of their private information is converted into economic figures. Moreover, how this economic value changes in relation to individual attributes, the traits of the trade opponent, and situational properties is studied. In this research, changes in the factors affecting private information value under different situations are analyzed from an economic perspective. Additionally, the study examines the associations between users' perceived risk and the value of their personal information. Using the cost-benefit analysis framework, the hypothesis that users' sense of private information value is influenced by individual attributes and situational properties is tested. This research therefore pursues three objectives. First, it identifies factors that affect users' recognition of the value of their personal information. Second, it provides evidence that information system users' economic valuation of information differs in response to personal, trade opponent, and situational attributes. Third, it investigates the impact of those attributes on individuals' perceived risk. Based on the assumption that personal, trade opponent and situational attributes affect users' recognition of the value of private information, this research presents an understanding of the different impacts of those attributes on recognizing the value of information from an economic perspective, and tests the associative relationships between perceived risk and decisions on the value of users' personal information.
In order to validate our research model, this research used regression analysis. Our results support that information breach experience and information security systems are associated with users' perceived risk; information control and uncertainty are also related to users' perceived risk. Therefore, users' perceived risk is a significant factor in evaluating the value of personal information, and it is differentiated by trade opponent and situational attributes. This research presents a new perspective on evaluating the value of users' personal information in the context of perceived risk and personal, trade opponent and situational attributes. It fills a gap in the literature by showing how users' perceived risk is associated with personal, trade opponent and situational attitudes when conducting business transactions that involve providing personal information. It adds to previous literature that a relationship exists between perceived risk and the value of users' private information from an economic perspective. It also provides meaningful insights for managers: in order to minimize the cost of information breaches, managers need to recognize the value of individuals' personal information and decide the proper amount of investment in protecting users' online information privacy.
Keywords: private information, value, users, perceived risk, online information privacy, attributes
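The abstract reports a regression of perceived risk on factors such as breach experience and information control. A minimal sketch of that kind of model via ordinary least squares follows; the data are entirely synthetic and the coefficients are made up for illustration, with no values taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: past breach experience (0/1) and perceived
# information control (0..1) -- illustrative stand-ins for the study's factors
breach = rng.integers(0, 2, n).astype(float)
control = rng.uniform(0.0, 1.0, n)

# Noise-free synthetic response: risk rises with breach experience,
# falls with perceived control (made-up coefficients)
risk = 0.5 + 2.0 * breach - 1.5 * control

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), breach, control])
coef, *_ = np.linalg.lstsq(X, risk, rcond=None)
```

With noise-free data the fit recovers the generating coefficients exactly; in the actual study, of course, the interest lies in which coefficients are significantly different from zero.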
Procedia PDF Downloads 239
1720 Comparison of the Effectiveness of Pain Cognitive-Behavioral Therapy and Its Computerized Version on Reduction of Pain Intensity, Depression, Anger and Anxiety in Children with Cancer: A Randomized Controlled Trial
Authors: Najmeh Hamid, Vajiheh Hamedy, Zahra Rostamianasl
Abstract:
Background: Cancer is one of the medical problems that is associated with pain. Moreover, the pain is combined with negative emotions such as anxiety, depression and anger. Poor pain management has negative effects on quality of life, effects that continue long after the painful experiences themselves. Objectives: The aim of this research was to compare the effectiveness of common cognitive behavioral therapy for pain and its computerized version in reducing pain intensity, depression, anger and anxiety in children with cancer. Methods: This randomized controlled clinical trial used a pre-test, post-test and follow-up design with a control group. We examined the effectiveness of common cognitive behavioral therapy for pain and its computerized version in reducing pain intensity, anxiety, depression and anger in children with cancer in Ahvaz; the two psychological interventions were compared with a control group. The sample consisted of 60 children aged 8 to 12 years with different types of cancer at Shafa hospital in Ahwaz, selected according to inclusion and exclusion criteria such as age, socioeconomic status and a clinical diagnostic interview. From these, 45 subjects were randomly selected and divided into three groups of 15 (two experimental and one control group). The research instruments included the Spielberger Anxiety Inventory (STAI Y-2) and an international pain measurement scale. The first experimental group received 6 sessions of cognitive-behavioral therapy over 6 weeks, the second group received the computerized version of cognitive-behavioral therapy for 6 weeks, and the control group did not receive any intervention. For ethical considerations, a copy of the computerized cognitive-behavioral therapy was provided to the control group.
After 6 weeks, all three groups were evaluated at post-test and again after a one-month follow-up. Results: The findings indicated that both interventions reduced the negative emotions (pain, anger, anxiety, depression) associated with cancer in children in comparison with the control group (p<0.0001). There was no significant difference between the two interventions themselves, meaning both are useful for reducing the negative effects of pain and enhancing adjustment. Conclusion: Computerized CBT can be used in situations in which there is no access to psychologists and psychological services, and can be a useful alternative to conventional psychological interventions.
Keywords: pain, children, psychological intervention, cancer, anger, anxiety, depression
Procedia PDF Downloads 80
1719 Nursing Professionals’ Perception of the Work Environment, Safety Climate and Job Satisfaction in the Brazilian Hospitals during the COVID-19 Pandemic
Authors: Ana Claudia de Souza Costa, Beatriz de Cássia Pinheiro Goulart, Karine de Cássia Cavalari, Henrique Ceretta Oliveira, Edineis de Brito Guirardello
Abstract:
Background: During the COVID-19 pandemic, nursing represented the largest category of health professionals on the front line. Investigating the practice environment and the job satisfaction of nursing professionals during the pandemic is therefore fundamental, since both reflect on the quality of care and the safety climate. The aim of this study was to evaluate and compare nursing professionals' perception of the work environment, job satisfaction, and safety climate across different hospitals and work shifts during the COVID-19 pandemic. Method: This is a cross-sectional survey of 130 nursing professionals from public, private and mixed hospitals in Brazil. For data collection, an electronic form was used containing personal and occupational variables, work environment, job satisfaction, and safety climate. The data were analyzed using descriptive statistics and ANOVA or Kruskal-Wallis tests according to the data distribution, which was evaluated by means of the Shapiro-Wilk test. The analysis was done in SPSS 23 software at a significance level of 5%. Results: The mean age of the participants was 35 years (±9.8), with a mean of 6.4 years (±6.7) of working experience in the institution. Overall, the nursing professionals evaluated the work environment as favorable. They were dissatisfied with their job in terms of pay, promotion, benefits, contingent rewards and operating procedures; satisfied with coworkers, nature of work, supervision, and communication; and had a negative perception of the safety climate. When comparing the hospitals, they did not differ in their perception of the work environment and safety climate.
However, they differed with regard to job satisfaction: nursing professionals from public hospitals were more dissatisfied with promotion than professionals from private (p=0.02) and mixed hospitals (p<0.01), and nursing professionals from mixed hospitals were more satisfied than those from private hospitals (p=0.04) with regard to supervision. Participants working night shifts had the worst perception of the work environment related to nurse participation in hospital affairs (p=0.02), nursing foundations for quality care (p=0.01), and nurse manager ability, leadership and support (p=0.02), as well as of the safety climate (p<0.01) and of job satisfaction related to contingent rewards (p=0.04), nature of work (p=0.03) and supervision (p<0.01). Conclusion: The nursing professionals had a favorable perception of the environment and safety climate but differed among hospitals regarding job satisfaction in the promotion and supervision domains. There was also a difference between work shifts, with night shifts having the lowest scores, except for satisfaction with operational conditions.
Keywords: health facility environment, job satisfaction, patient safety, nursing
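The group comparisons above rely on ANOVA or the Kruskal-Wallis test depending on whether the data pass a normality check. As a sketch of what the Kruskal-Wallis H statistic computes (a rank-based analogue of one-way ANOVA), here is a minimal tie-free implementation with toy data; both the implementation shortcuts and the numbers are illustrative only.

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (assumes no tied observations)."""
    # Rank all observations jointly across the groups
    pooled = sorted(x for g in groups for x in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    n = len(pooled)
    # H = 12/(N(N+1)) * sum over groups of n_i * (mean_rank_i - (N+1)/2)^2
    h = 0.0
    for g in groups:
        mean_rank = sum(rank[x] for x in g) / len(g)
        h += len(g) * (mean_rank - (n + 1) / 2) ** 2
    return 12.0 / (n * (n + 1)) * h

# Toy example: three well-separated groups (e.g. three hospital types)
h_stat = kruskal_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
```

In practice one would compare H against a chi-squared distribution with k−1 degrees of freedom (and correct for ties), which is what SPSS does under the hood.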
Procedia PDF Downloads 157
1718 Effect of the Orifice Plate Specifications on Coefficient of Discharge
Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer
Abstract:
Because the orifice plate is relatively inexpensive, requires very little maintenance and is only calibrated during plant turnarounds, it is in widespread use in the gas industry. Inaccuracy of measurement in fiscal metering stations may well be the most significant factor in mischarges in the natural gas industry in Libya. A very small error in measurement can add up to a fast-escalating financial burden in custody transfer transactions; the unaccounted gas quantity transferred annually via orifice plates in Libya could be estimated at multi-million dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard; hence, increasing knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a drastic pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of the fluid mechanics and heat and mass transfer of various industrial applications. Getting deeper into the underlying physical phenomena and predicting all relevant parameters and variables with high spatial and temporal resolution are the greatest advantages of CFD. In this paper, flow phenomena for air passing through an orifice meter were numerically analyzed with CFD-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. Discharge coefficients were compared with discharge coefficients estimated by ISO 5167.
The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, and perpendicularity and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5 and a Reynolds number of 91,100 was taken as the model case. The results highlighted that the discharge coefficients were highly responsive to the variation of plate specifications, and in all cases the discharge coefficients for D and D/2 tappings were very close to those of vena contracta tappings, which are regarded as the ideal arrangement. Also, in a general sense, it was found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, so further thorough consideration is still needed.
Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications
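For reference, the ISO 5167 working equation relates mass flow to the discharge coefficient as q_m = (C/√(1−β⁴))·ε·(π/4)d²·√(2ρΔp); C is the quantity the study compares across tapping arrangements. Below is a minimal incompressible sketch (expansibility ε = 1); only the 2 in pipe diameter and β = 0.5 come from the paper, while the discharge coefficient, gas density and differential pressure are illustrative assumptions.

```python
import math

def orifice_mass_flow(cd, d_pipe, beta, dp, rho):
    """Incompressible orifice mass flow in the ISO 5167 form (expansibility = 1).

    cd: discharge coefficient, d_pipe: pipe diameter [m], beta: d/D ratio,
    dp: differential pressure [Pa], rho: fluid density [kg/m^3]."""
    d = beta * d_pipe                          # orifice bore diameter
    bore_area = math.pi / 4.0 * d ** 2
    return cd * bore_area / math.sqrt(1.0 - beta ** 4) * math.sqrt(2.0 * rho * dp)

# Model case from the paper: D = 2 in (0.0508 m), beta = 0.5.
# Cd = 0.6, air density 1.2 kg/m^3 and dp = 1000 Pa are assumed for illustration.
m_dot = orifice_mass_flow(cd=0.6, d_pipe=0.0508, beta=0.5, dp=1000.0, rho=1.2)
```

The equation makes clear why C matters for fiscal metering: a 1% error in the discharge coefficient propagates directly into a 1% error in the billed mass flow.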
Procedia PDF Downloads 119
1717 Clustering-Based Computational Workload Minimization in Ontology Matching
Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris
Abstract:
In order to build a matching pattern for each class correspondence of an ontology, it is necessary to specify a set of attribute correspondences across the two corresponding classes by clustering. Clustering reduces the number of potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching, which makes ontology matching computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the number of potential element correspondences during mapping, and thereby the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies: during the matching process, any two instances across two different data sets should be compared on their attribute values, so that they can be judged to be the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other, and any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in two instances across which there is an attribute correspondence.
This work will present how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We will also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences to be applied in generating a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs considered when comparing instances, as only attribute pairs whose coverage probability reaches 100% and attributes above the specified threshold are considered potential attributes for a match. Using clustering reduces the size of the potential element correspondences to be considered during the mapping activity, which in turn reduces the computational workload significantly; otherwise, every element of a class in the source ontology would have to be compared with every element of the corresponding classes in the target ontology. K-medoids can ably cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs is not considered when constructing the matching pattern.
Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching
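As a rough illustration of the clustering step described above, the following sketch clusters class attributes by simple numeric value features using a plain PAM-style K-medoids loop. The two-dimensional "value features" and all names are hypothetical; a real implementation would derive richer statistical features from the ontology instances:

```python
import random

def k_medoids(points, k, dist, iters=100, seed=0):
    """PAM-style K-medoids: alternately assign each point to its nearest
    medoid, then move each medoid to the cluster member that minimizes
    the total within-cluster distance."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    clusters = {}
    for _ in range(iters):
        # Assignment step: each point goes to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dist(p, points[m]))
            clusters[nearest].append(i)
        # Update step: pick the member minimizing total distance to its cluster
        new_medoids = [
            min(members, key=lambda c: sum(dist(points[c], points[j]) for j in members))
            for members in clusters.values()
        ]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return clusters

# Hypothetical per-attribute value features: (mean value length, numeric ratio)
features = [(4.0, 0.9), (4.2, 1.0), (20.0, 0.1), (22.0, 0.0), (21.0, 0.05)]
euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
clusters = k_medoids(features, k=2, dist=euclid)
```

With these well-separated toy features, the short numeric-looking attributes (indices 0 and 1) end up in one cluster and the longer ones in the other, so only attribute pairs within matched clusters need be compared across ontologies.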
Procedia PDF Downloads 248
1716 Juvenile Fish Associated with Pondweed and Charophyte Habitat: A Case Study Using Upgraded Pop-up Net in the Estuarine Part of the Curonian Lagoon
Authors: M. Bučas, A. Skersonas, E. Ivanauskas, J. Lesutienė, N. Nika, G. Srėbalienė, E. Tiškus, J. Gintauskas, A. Šaškov, G. Martin
Abstract:
Submerged vegetation enhances the heterogeneity of sublittoral habitats; therefore, macrophyte stands are essential elements of aquatic ecosystems for maintaining a diverse fish fauna. Fish-habitat relations have been extensively studied in streams and coastal waters, but remain understudied in lakes and estuaries. The aim of this study is to assess temporal (diurnal and seasonal) patterns of juvenile fish assemblages associated with common submerged macrophyte habitats, which have spread significantly during the recent decade in the upper littoral part of the Curonian Lagoon. The assessment was performed by means of an upgraded pop-up net approach, resulting in more precise sampling than other techniques. The optimal number of samples (i.e., pop-up nets) required to cover >80% of the total number of fish species depended on the time of day in both study sites: at least 7 and 9 nets in the evening (18:00-24:00) in the Southern and Northern study sites, respectively. In total, 14 fish species were recorded, with perch and roach dominating (48% and 24%, respectively). Multivariate analysis showed that water salinity and seasonality (temperature or sampling month) were the primary factors determining fish assemblage composition. The Southern littoral area, less affected by brackish water conditions, hosted a higher number of species (13) than the Northern site (8). In the latter site, brackish-water-tolerant species (three-spined and nine-spined sticklebacks, spiny loach, roach, and round goby) were more abundant than in the Southern site. Perch and ruffe dominated in the Southern site. Spiny loach and nine-spined stickleback were more frequent in September, while ruffe, perch, and roach occurred more in July. The diel dynamics of the common species such as perch, roach, and ruffe followed the general pattern, but were species-specific and depended on the study site, habitat, and month. 
The species composition did not differ significantly between macrophyte habitats; however, it differed from the results obtained in 2005 at both study sites, indicating the importance of the charophyte stands that have expanded in the littoral zone during the last decade.
Keywords: diel dynamics, charophytes, pondweeds, herbivorous and benthivorous fishes, littoral, nursery habitat, shelter
Procedia PDF Downloads 189
1715 Economic Impact of Drought on Agricultural Society: Evidence Based on a Village Study in Maharashtra, India
Authors: Harshan Tee Pee
Abstract:
Climate elements include surface temperature, rainfall patterns, humidity, type and amount of cloudiness, air pressure, and wind speed and direction. Change in one element can have an impact on the regional climate. Scientific predictions indicate that global climate change will increase the number of extreme events, leading to more frequent natural hazards. Global warming is likely to intensify the risk of drought in certain parts of the world while leading to increased rainfall in others. Drought is a slowly advancing disaster and a creeping phenomenon whose effects accumulate over a long period of time. Droughts are naturally linked with aridity, but they occur over most parts of the world (both wet and humid regions) and create severe impacts on agriculture, basic household welfare, and ecosystems. Drought conditions occur at least every three years in India, which is among the most vulnerable drought-prone countries in the world. The economic impacts resulting from extreme environmental events and disasters are huge, as many economic activities are disrupted. The focus of this paper is to develop a comprehensive understanding of the distributional impacts of disaster, especially the impact of drought on agricultural production and income, through a panel study (covering the drought year and one year after the drought) in Raikhel village, Maharashtra, India. The major findings of the study indicate that both the cultivated area and the number of cultivating households fell after the drought, indicating a shift in livelihood: households moved from agriculture to non-agriculture. The decline in the gross cropped area and in the production of various crops followed from the negative income earned on these crops in the previous agricultural season. All landholding categories of households except landlords had negative income in the drought year, and income disparities between households were also higher in that year. 
In the drought year, the cost of cultivation was higher for all landholding categories due to increased irrigation and input costs. In the drought year, agricultural products (50 per cent of the total output) were used for household consumption rather than sold in the market. It is evident from the study that livelihoods based on natural resources became less attractive to people due to the risk involved, and people moved to lower-risk livelihoods for their sustenance.
Keywords: climate change, drought, agriculture economics, disaster impact
Procedia PDF Downloads 118
1714 In vitro Study of Inflammatory Gene Expression Suppression of Strawberry and Blackberry Extracts
Authors: Franco Van De Velde, Debora Esposito, Maria E. Pirovani, Mary A. Lila
Abstract:
The physiology of various inflammatory diseases is a complex process mediated by inflammatory and immune cells such as macrophages and monocytes. Chronic inflammation, as observed in many cardiovascular and autoimmune disorders, occurs when the low-grade inflammatory response fails to resolve with time. Because of the complexity of chronic inflammatory disease, major efforts have focused on identifying novel anti-inflammatory agents and dietary regimes that prevent the pro-inflammatory process at the early stage of gene expression of key pro-inflammatory mediators and cytokines. The ability of extracts of three blackberry cultivars ('Jumbo', 'Black Satin' and 'Dirksen') and one strawberry cultivar ('Camarosa') to inhibit four well-known genetic biomarkers of inflammation, namely inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (Cox-2), interleukin-1β (IL-1β) and interleukin-6 (IL-6), in an in vitro lipopolysaccharide-stimulated murine RAW 264.7 macrophage model was investigated. Moreover, the effect of these extracts on intracellular reactive oxygen species (ROS) and nitric oxide (NO) production was assessed. Assays were conducted at a crude extract concentration of 50 µg/mL, an amount that is easily achievable in the gastrointestinal tract after berry consumption. The mRNA expression levels of Cox-2 and IL-6 were reduced consistently (by more than 30%) by extracts of 'Jumbo' and 'Black Satin' blackberries. Strawberry extracts showed a strong reduction in mRNA expression of IL-6 (more than 65%) and a moderate reduction in mRNA expression of Cox-2 (more than 35%). This behavior mirrors the intracellular ROS production of the LPS-stimulated RAW 264.7 macrophages after treatment with blackberry 'Black Satin' and 'Jumbo' and strawberry 'Camarosa' extracts, suggesting that phytochemicals from these fruits may play a role in health maintenance by reducing oxidative stress. 
On the other hand, no effective inhibition of the gene expression of IL-1β and iNOS was observed for any of the blackberry and strawberry extracts. However, suppression of NO production in the activated macrophages of 5-25% was observed for 'Jumbo' and 'Black Satin' blackberry extracts and 'Camarosa' strawberry extracts, suggesting an NO-suppressing property of the phytochemicals of these fruits. All these results suggest the potential beneficial effects of the studied berries as functional foods with antioxidant and anti-inflammatory roles. Moreover, the underlying role of phytochemicals from these fruits in protecting against the inflammatory process deserves to be further explored.
Keywords: cyclooxygenase-2, functional foods, interleukin-6, reactive oxygen species
Procedia PDF Downloads 237
1713 TARF: Web Toolkit for Annotating RNA-Related Genomic Features
Abstract:
Genomic features, i.e., genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed TARF, a dedicated web toolkit for annotating RNA-related genomic features. TARF intends to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded features in BED format and either specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool fulfills three main functions. First, it adds annotation on gene and RNA transcript components. For every feature provided by the user, overlaps with RNA transcript components are identified, and the information is combined in one table that is available for copying and download. Summary statistics on ambiguous assignments are also produced. Second, the tool provides a convenient visualization of the features at the single gene/transcript level. 
For a selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates to show their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated using the Guitar R/Bioconductor package. The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with three different types of genomic features related to chromatin H3K4me3, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features, and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
Keywords: RNA-related genomic features, annotation, visualization, web server
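The component-overlap annotation described above reduces, at its core, to an interval-overlap test between each feature and the transcript components. A minimal illustration follows; all coordinates are hypothetical, and splicing (which TARF handles via the transcript database) is ignored:

```python
def classify_feature(start, end, components):
    """Return the transcript components a feature interval overlaps.
    `components` maps a component name to its (start, end) span.
    Overlap with more than one component signals an ambiguous
    assignment, mirroring the summary statistics described above."""
    return [name for name, (c_start, c_end) in components.items()
            if start < c_end and end > c_start]  # half-open overlap test

# Hypothetical unspliced transcript spanning positions 1000-4000
transcript = {"5'UTR": (1000, 1200), "CDS": (1200, 3500), "3'UTR": (3500, 4000)}

hit = classify_feature(1100, 1150, transcript)       # entirely within the 5'UTR
ambiguous = classify_feature(1150, 1300, transcript) # straddles 5'UTR and CDS
```

A feature falling entirely inside one component yields a single label, while one straddling a boundary yields two and would be counted among the ambiguous assignments.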
Procedia PDF Downloads 208
1712 Basics for Corruption Reduction and Fraud Prevention in Industrial/Humanitarian Organizations through Supplier Management in Supply Chain Systems
Authors: Ibrahim Burki
Abstract:
Unfortunately, all organizations (industrial and humanitarian/non-governmental organizations alike) are prone to fraud and corruption in their supply chain management routines, and the reputational and financial fallout can be disastrous. The growing number of companies using suppliers based in the local market has certainly increased the threat of fraud as well as corruption. There are various potential threats, such as poor or non-existent record keeping, purchasing of lower-quality goods at higher prices, excessive entertainment of staff by suppliers, irregular communications between procurement staff and suppliers (such as calls or text messages to mobile phones), staff demanding extended periods of notice before they allow an audit to take place, inexperienced buyers, and more. Despite all the above-mentioned threats, this research paper emphasizes the effectiveness of well-maintained vendor records and the sorting/filtration of vendors in cutting down the possible threats of corruption and fraud. This exercise was applied in a humanitarian organization in Pakistan, but it is applicable to the whole South Asia region due to the similarity of culture and contexts. In that organization, there were more than 550 (five hundred and fifty) registered vendors. As requirements during disasters or emergency phases are met on an urgent basis, golden opportunities arise for fake companies, or for sister companies of already registered companies, to become involved in the tendering process without declaration, or even under a different (new) company name. Therefore, a list of required documents (along with a checklist) was developed and sent to all vendors in the current database, and based on the receipt of the requested documents the vendors were sorted. Furthermore, these vendors were divided into active (meeting the entire set criteria) and non-active groups. 
This initial filtration stage allowed the firm to continue its work without a complete shutdown: only vendors falling in the active group are allowed to participate in tenders until the whole process is completed. Likewise, only those companies or firms meeting the set criteria (the active category) shall be allowed to register in the future, along with a dedicated filing system (both soft and hard copies shall be maintained), and all of the companies/firms in the active group shall be physically verified (visited) by a committee comprising senior members of at least the Finance, Supply Chain (other than procurement) and Security departments.
Keywords: corruption reduction, fraud prevention, supplier management, industrial/humanitarian organizations
Procedia PDF Downloads 539
1711 The Ideal Memory Substitute for Computer Memory Hierarchy
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Computer system components such as the CPU, the controllers, and the operating system work together as a team, and storage or memory is, apart from the processor, the most essential part of this team. The memory and storage system, including processor caches, main memory, and storage, forms the basic storage component of a computer system. The characteristics of the different types of storage are inherent in the design and the technology employed in manufacturing. These characteristics define the speed, compatibility, cost, volatility, and density of the various storage types. Most computers rely on a hierarchy of storage devices for performance, so the effective and efficient use of the memory hierarchy is the single most important aspect of computer system design and use. The memory hierarchy is becoming a fundamental performance and energy bottleneck, due to the widening gap between the increasing demands of modern computer applications and the limited performance and energy efficiency provided by traditional memory technologies. With the dramatic development of computer systems, computer storage has had a difficult time keeping up with processor speed. Computer architects therefore face constant challenges in developing high-speed, high-performance computer storage that is energy-efficient, cost-effective and reliable enough to intercept processor requests. It is very clear that substantial advancements in redesigning the existing physical and logical memory structures to match the latest processor potential are crucial. This research work investigates the importance of the computer memory (storage) hierarchy in the design of computer systems. The constituent storage types of today's hierarchy were investigated, looking at the design technologies and how these technologies affect the memory characteristics: speed, density, stability and cost. 
The investigation considered how these characteristics could best be harnessed for the overall efficiency of the computer system. The research revealed that the best single type of storage, which we refer to as the ideal memory, is a logical single physical memory that would combine the best attributes of each memory type making up the memory hierarchy. It is a single memory with access speed as high as that found in CPU registers, combined with the highest storage capacity, offering excellent stability in the presence or absence of power, as found in magnetic and optical disks, as against volatile DRAM, and yet offering a cost-effectiveness far from that of expensive SRAM. The research work suggests that overcoming these barriers may mean that memory manufacturing takes a total departure from present technologies and adopts one that overcomes the challenges associated with traditional memory technologies.
Keywords: cache, memory-hierarchy, memory, registers, storage
Procedia PDF Downloads 164
1710 Development of a Test Plant for Parabolic Trough Solar Collectors Characterization
Authors: Nelson Ponce Jr., Jonas R. Gazoli, Alessandro Sete, Roberto M. G. Velásquez, Valério L. Borges, Moacir A. S. de Andrade
Abstract:
The search for increased efficiency in generation systems has been of great importance in recent years to reduce the impact of greenhouse gas emissions and global warming. For clean energy sources, such as generation systems that use concentrated solar power technology, this efficiency improvement translates into a lower investment per kW, improving a project's viability. For the specific case of parabolic trough solar concentrators, performance is strongly linked to the geometric precision of assembly and the individual efficiencies of the main components, such as the parabolic mirrors and receiver tubes. Thus, for accurate results, the efficiency analysis should be conducted empirically, under mounting and operating conditions like those observed in the field. The Brazilian power generation and distribution company Eletrobras Furnas, through the R&D program of the National Agency of Electrical Energy, has developed a plant for testing parabolic trough concentrators, located in Aparecida de Goiânia, in the state of Goiás, Brazil. The main objective of this test plant is the characterization of the prototype concentrator being developed by the company itself in partnership with Eudora Energia, seeking to optimize it to obtain the same or better efficiency than the concentrators of this type already known commercially. The test plant is a closed pipe system in which a pump circulates a heat transfer fluid, also called HTF, through the concentrator being characterized. A flow meter and two temperature transmitters, installed at the inlet and outlet of the concentrator, record the parameters necessary to compute the power absorbed by the system and then calculate its efficiency based on the direct solar irradiation available during the test period. After the HTF gains heat in the concentrator, it flows through heat exchangers that allow the acquired energy to be dissipated to the ambient. 
The goal is to keep the concentrator inlet temperature constant throughout the desired test period. The plant performs the tests autonomously: the operator enters the HTF flow rate, the desired concentrator inlet temperature, and the test time into the control system. This paper presents the methodology employed for design and operation, as well as the instrumentation needed for the development of a parabolic trough test plant, serving as a guideline for standardizing such facilities.
Keywords: parabolic trough, concentrated solar power, CSP, solar power, test plant, energy efficiency, performance characterization, renewable energy
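The efficiency calculation described above (useful heat gained by the HTF divided by the direct solar power incident on the aperture) can be sketched as follows. The fluid properties, flow rate and aperture area are assumed illustrative values, not figures from the plant:

```python
def collector_efficiency(m_dot, cp, t_in, t_out, dni, aperture_area):
    """Thermal efficiency of a parabolic trough collector: useful heat
    absorbed by the HTF over the solar power incident on the aperture."""
    q_useful = m_dot * cp * (t_out - t_in)  # W, from flow meter + temperature transmitters
    q_solar = dni * aperture_area           # W, from the measured direct irradiance
    return q_useful / q_solar

# Assumed example values (not from the paper):
eta = collector_efficiency(
    m_dot=1.2,                 # HTF mass flow rate, kg/s
    cp=2300.0,                 # HTF specific heat, J/(kg*K)
    t_in=150.0, t_out=165.0,   # concentrator inlet/outlet temperatures, degC
    dni=900.0,                 # direct normal irradiance, W/m^2
    aperture_area=60.0,        # collector aperture area, m^2
)
print(f"efficiency = {eta:.2f}")
```

In practice the instantaneous values would be averaged over the test period while the control system holds the inlet temperature constant.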
Procedia PDF Downloads 118
1709 Cosmetic Recommendation Approach Using Machine Learning
Authors: Shakila N. Senarath, Dinesh Asanka, Janaka Wijayanayake
Abstract:
The necessity of cosmetic products arises from consumer needs for personal appearance and hygiene. A cosmetic product consists of various chemical ingredients, which may help to keep the skin healthy or may lead to damage, and a given chemical ingredient does not perform the same on every person. The most appropriate way to select a healthy cosmetic product is to identify the texture of the body first and then select the most suitable product with safe ingredients; the selection process is therefore complicated, and consumer surveys have shown that most of the time consumers carry it out improperly. This study suggests a content-based system that recommends cosmetic products based on human factors; to that extent, skin type, gender and price range are considered as the human factors. The proposed system is implemented using machine learning, with the consumer's skin type, gender and price range taken as inputs. The skin type of the consumer is derived using the Baumann Skin Type Questionnaire, a value-based approach that uses a number of questions to map the user's skin to one of the 16 skin types of the Baumann Skin Type Indicator (BSTI). Two datasets were collected for the research: a user dataset, gathered through a questionnaire given to the public, and a cosmetic dataset. Product details are included in the cosmetic dataset, which covers 5 different product categories (moisturizer, cleanser, sun protector, face mask, eye cream). TF-IDF (Term Frequency - Inverse Document Frequency) is applied to vectorize the cosmetic ingredients in the generic cosmetic products dataset and the user-preferred dataset. 
Using the TF-IDF vectors, each user-preferred product and generic cosmetic product can be represented as a sparse vector. The similarity between each user-preferred product and each generic cosmetic product is calculated using cosine similarity. For the recommendation process, a similarity matrix is used: the higher the similarity, the better the match for the consumer. By sorting a user's column of the similarity matrix in descending order, the top recommended products can be retrieved. Even though the result is a list of similar products, since user information such as gender and the price range for purchasing has been gathered, further optimization can be done by weighting those parameters once a set of recommended products for a user has been retrieved.
Keywords: content-based filtering, cosmetics, machine learning, recommendation system
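A minimal sketch of the TF-IDF plus cosine-similarity ranking described above, using toy ingredient lists. The ingredient names and products are hypothetical; the study's actual pipeline operates on the full cosmetic and user datasets:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors (as sparse dicts) for tokenized ingredient lists."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: (tf / len(doc)) * idf[t] for t, tf in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical ingredient lists (illustrative only)
products = [
    ["water", "glycerin", "niacinamide"],   # product A
    ["water", "glycerin", "retinol"],       # product B
    ["alcohol", "fragrance", "limonene"],   # product C
]
user_pref = ["water", "glycerin", "niacinamide"]

vecs = tfidf_vectors(products + [user_pref])
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]           # user column
ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```

Product A, identical to the user's preference, ranks first; product C, sharing no ingredients, scores zero. Gender and price-range weights would then be applied to this ranked list.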
Procedia PDF Downloads 134
1708 Signaling Theory: An Investigation on the Informativeness of Dividends and Earnings Announcements
Authors: Faustina Masocha, Vusani Moyo
Abstract:
For decades, dividend announcements have been presumed to contain important signals about the future prospects of companies, and the same has been presumed about management earnings announcements. Despite both dividend and earnings announcements being considered informative, a number of researchers have questioned their credibility and found both to contain only short-term signals. Regarding dividend announcements, some authors argued that although they might contain important information that can change share prices, and consequently generate abnormal returns, their degree of informativeness is lower than that of other signaling tools such as earnings announcements. Yet this claim has been refuted by other researchers, who found the effect of earnings to be transitory and of little value to shareholders, as indicated by the small abnormal returns earned during the period surrounding earnings announcements. Considering the above, it is apparent that both dividends and earnings have been hypothesized to have a signaling impact, which prompts the question of which of these two signaling tools is more informative. To answer this question, two follow-up questions were asked. The first sought to determine which event has the greater effect on share prices, while the second focused on which event influences trading volume the most. To answer the first question and evaluate the effect each of these events has on share prices, an event study methodology was employed on a sample of the top 10 JSE-listed companies, using data collected from 2012 to 2019, to determine whether shareholders gained abnormal returns (ARs) around announcement dates. The event that resulted in the most persistent and largest ARs was considered more informative. 
For the second follow-up question, an investigation was conducted to determine whether dividend or earnings announcements influenced trading patterns, resulting in abnormal trading volumes (ATV) around announcement time; the event that resulted in the greater ATV was considered more informative. Using an estimation period of 20 days, an event window of 21 days, and hypothesis testing, it was found that announcements of earnings increases resulted in the largest ARs and Cumulative Abnormal Returns (CARs), and had a lasting effect, in comparison to dividend announcements, whose effect lasted only until day +3. This supports empirical arguments that the signaling effect of dividends is diminishing. It was also found that when reported earnings declined relative to the previous period, trading volume increased, resulting in ATV. Although dividend announcements did result in abnormal returns, these were smaller than those acquired during earnings announcements, which refutes a number of theoretical and empirical arguments that found dividends to be more informative than earnings announcements.
Keywords: dividend signaling, event study methodology, information content of earnings, signaling theory
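The abnormal-return computation at the core of the event study methodology can be sketched as follows. The market-model parameters and return series are illustrative assumptions, not the study's JSE data:

```python
def abnormal_returns(stock, market, alpha, beta):
    """Market-model abnormal returns: AR_t = R_t - (alpha + beta * R_mt),
    i.e. the realized return minus the expected return implied by the
    market model estimated over the pre-event estimation window."""
    return [r - (alpha + beta * rm) for r, rm in zip(stock, market)]

def car(ars):
    """Cumulative abnormal return over the event window."""
    return sum(ars)

# Assumed daily returns around an announcement (illustrative only)
stock_r  = [0.010, 0.035, 0.012, -0.004]
market_r = [0.004, 0.006, 0.005,  0.002]
alpha, beta = 0.001, 1.1  # assumed estimates from the estimation window

ars = abnormal_returns(stock_r, market_r, alpha, beta)
car_value = car(ars)
```

Hypothesis testing then asks whether the CAR over the event window differs significantly from zero; the persistence of positive ARs day by day is what distinguishes the earnings effect from the short-lived dividend effect reported above.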
Procedia PDF Downloads 172
1707 Management in the Transport of Pigs to Slaughterhouses in the Valle De Aburrá, Antioquia
Authors: Natalia Uribe Corrales, María Fernanda Benavides Erazo, Santiago Henao Villegas
Abstract:
Introduction: Transport is a crucial link in the porcine chain because it is a stressful event for the animal: it is a new environment that generates new interactions, together with factors such as speed, noise, temperature changes, vibrations, and deprivation of food and water. Inadequate handling at this stage can therefore lead to bruises, musculoskeletal injuries, fatigue, and mortality, resulting in carcass seizures and economic losses. Objective: To characterize the transport and driving practices for the mobilization of live pigs to slaughter plants in the Valle de Aburrá, Antioquia, Colombia in 2017. Methods: A descriptive cross-sectional study was carried out with the transporters arriving at the slaughterhouses approved by the National Institute for Food and Medicine Surveillance (INVIMA) during 2017 in the Valle de Aburrá. Samples were obtained by probabilistic sampling. Variables such as journey time, mechanical-technical inspection certificate, training in animal welfare, driving speed, material and condition of floors and separators, supervision of animals during the trip, load density and mortality were analyzed. The study was approved by the CES University ethics committee for the use and care of animals (CICUA, Act number 14 of 2015). Results: 190 trucks were analyzed, finding that 12.4% did not have an up-to-date mechanical-technical inspection certificate. The transporters had an average of 9.4 years (s.d. 7.5) of experience in pig transportation, and 85.8% reported not having received training in animal welfare. The average speed was 63.04 km/h (s.d. 13.46), and 62% of trucks had floors in good condition; nevertheless, 48% had separators in bad condition. On the other hand, 88% of transporters did not supervise their animals during the journey, although 62.2% used an adequate loading density; average mortality was 0.2 deaths per trip (s.d. 0.5). 
Conclusions: Transporters should be trained on issues such as proper vehicle maintenance, animal welfare, obligatory inspection of animals during mobilization and driving speed, as these poorly managed indicators generate stress in the animals, increasing injuries as well as possible accidents. It is also necessary to continue improving aspects such as aluminum floors and separators, which favor easy cleaning and maintenance, as well as appropriate handling of the loading density so as to promote animal welfare.
Keywords: animal welfare, driving practices, pigs, truck infrastructure
Procedia PDF Downloads 208
1706 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel, remote from the impact locations, were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function, and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g., magnitude) of a stochastic force at a defined location, is extended here to identify both the location and magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from impact at each potential location. The problem can be categorized as under-determined (fewer sensors than impact locations), even-determined (as many sensors as impact locations), or over-determined (more sensors than impact locations). The under-determined case studied here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. 
Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently applied to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different signal window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented hammer is sensitive to the impact location on the structure, its shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.
Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
Procedia PDF Downloads 535
1705 Continuous Glucose Monitoring Systems and the Improvement in Hypoglycemic Awareness Post-Islet Transplantation: A Single-Centre Cohort Study
Authors: Clare Flood, Shareen Forbes
Abstract:
Background: Type 1 diabetes mellitus (T1DM) is an autoimmune disorder affecting >400,000 people in the UK alone, with the global prevalence expected to double in the next decade. Islet transplantation offers a minimally invasive procedure with very low morbidity and almost no mortality, and is now as effective as whole-pancreas transplantation. The procedure was introduced in the UK in 2011 for patients with the most severe T1DM: those with unstable blood glucose, frequent episodes of severe hypoglycemia and impaired awareness of hypoglycemia (IAH). Objectives: To evaluate the effectiveness of islet transplantation in improving glycemic control, reducing the burden of hypoglycemia and improving awareness of hypoglycemia, through a single-centre cohort study at the Royal Infirmary of Edinburgh. Glycemic control and the degree of hypoglycemic awareness were determined and monitored pre- and post-transplantation to determine the effectiveness of the procedure. Methods: A retrospective analysis of data collected over three years from the 16 patients who have undergone islet transplantation in Scotland. Glycated haemoglobin (HbA1c) was measured and continuous glucose monitoring systems (CGMS) were used to assess glycemic control, while Gold and Clarke score questionnaires tested IAH. Results: All patients had improved glycemic control following transplant, with optimal control seen at 3 months post-transplant. Glycemic control improved significantly, as illustrated by the percentage of time in hypoglycemia in the months following transplant (p=0.0211) and by HbA1c (p=0.0426). Improved Clarke (p=0.0034) and Gold (p=0.0001) scores indicate improved hypoglycemia awareness following transplant.
Conclusion: While the small sample of islet transplant recipients at the Royal Infirmary of Edinburgh prevents definitive conclusions from being drawn, our retrospective single-centre cohort study of 16 patients indicates that islet transplantation can improve glycemic control and reduce the burden of hypoglycemia and IAH post-transplant. These data can be combined with similar trials at other centres to increase statistical power, but the research in Edinburgh suggests that the minimally invasive procedure of islet transplantation offers selected patients with extremely unstable T1DM the opportunity to regain control of their condition and improve their quality of life.
Keywords: diabetes, islet, transplant, CGMS
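The "percentage time in hypoglycemia" metric reported above can be sketched as follows. The 3.9 mmol/L threshold is a commonly used cut-off assumed here (the abstract does not state which threshold was applied), and the CGM traces are fabricated purely for illustration.

```python
def percent_time_in_hypoglycemia(glucose_mmol, threshold=3.9):
    """Percentage of CGM readings below the hypoglycemia threshold.
    Assumes evenly spaced readings, so reading count is a proxy for time."""
    if not glucose_mmol:
        raise ValueError("no CGM readings supplied")
    below = sum(1 for g in glucose_mmol if g < threshold)
    return 100.0 * below / len(glucose_mmol)

# Illustrative (fabricated) traces: frequent lows pre-transplant,
# none post-transplant.
pre_transplant = [3.1, 3.5, 4.2, 5.8, 3.4, 6.1, 2.9, 7.0, 3.7, 5.5]
post_transplant = [5.2, 6.0, 4.8, 5.5, 6.3, 4.4, 5.9, 6.6, 5.1, 4.9]
```

Comparing such per-patient percentages before and after transplantation is what yields the p=0.0211 reported above.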
Procedia PDF Downloads 271
1704 Effect of Sodium Arsenite Exposure on Pharmacodynamic of Meloxicam in Male Wistar Rats
Authors: Prashantkumar Waghe, N. Prakash, N. D. Prasada, L. V. Lokesh, M. Vijay Kumar, Vinay Tikare
Abstract:
Arsenic is a naturally occurring metalloid with potent toxic effects. It is ubiquitous in the environment, released from both natural and anthropogenic sources, and has the potential to cause various health hazards in exposed populations. Arsenic exposure through drinking water is considered one of the most serious global environmental threats, including in Southeast Asia. The aim of the present study was to evaluate the modulatory role of subacute exposure to sodium (meta)arsenite on the antinociceptive, anti-inflammatory and antipyretic responses mediated by meloxicam in rats. Rats were exposed to arsenic as sodium arsenite through drinking water for 28 days. A single dose of meloxicam (2 mg/kg b. wt.) was administered by oral gavage on the 29th day; the exact time of administration depended on the type of test. Rats were divided randomly into 5 groups (n=6). Group I served as the normal control and received arsenic-free drinking water, while rats in Group II were maintained like Group I but received meloxicam on the 29th day. Groups III, IV and V were pre-exposed to arsenic through drinking water at 0.5, 5.0 and 50 ppm, respectively, for 28 days and were administered meloxicam the next day; assessments of pain and inflammation were carried out using the formalin-induced nociception and carrageenan-induced inflammation models, respectively, following standard protocols. For assessment of antipyretic effects, one additional group (Group VI) was given LPS at 1.8 mg/kg b. wt. to induce pyrexia (LPS control). The higher dose of arsenic inhibited the meloxicam-mediated antinociceptive, anti-inflammatory and antipyretic responses. Further, meloxicam inhibited the arsenic-induced levels of tumor necrosis factor-α, interleukin-1β, interleukin-6 and COX-2-mediated prostaglandin E2 in hind paw muscle. These results suggest a functional antagonism of meloxicam by arsenic.
This may relate to arsenic-mediated local release of tumor necrosis factor-α, interleukin-1β and interleukin-6, which in turn drive COX-2-mediated prostaglandin E2 production. Based on the experimental study, it is concluded that sub-acute exposure to arsenic through drinking water aggravates pyrexia, inflammation and pain at environmentally relevant concentrations and decreases the therapeutic efficacy of meloxicam at higher levels of arsenite exposure. These observations thus have clinical relevance in situations where animals are exposed to arsenic in endemic geographical locations.
Keywords: arsenic, analgesic activity, meloxicam, Wistar rats
Procedia PDF Downloads 185
1703 The Determination of Pb and Zn Phytoremediation Potential and Effect of Interaction between Cadmium and Zinc on Metabolism of Buckwheat (Fagopyrum Esculentum)
Authors: Nurdan Olguncelik Kaplan, Aysen Akay
Abstract:
Soil pollution has become a global problem. Pollutants added to the soil destroy and alter its structure, and the resulting problems are becoming more complex and, in this sense, harder and more costly to correct. Cadmium is highly mobile in the soil-plant system, so it can easily enter the human and animal food chain, which makes it particularly dangerous. Cadmium absorbed and stored by plants causes many metabolic changes, affecting protein synthesis, nitrogen and carbohydrate metabolism, enzyme (nitrate reductase) activation, and photosynthesis and chlorophyll synthesis. Cadmium has no known biological function in plants and is not an essential element; plants generally take it up in small amounts, where it competes with zinc, and it causes root damage. Buckwheat (Fagopyrum esculentum) is an important nutraceutical because of its high content of flavonoids, minerals and vitamins, and its nutritionally balanced amino-acid composition. Buckwheat has relatively high biomass productivity, is adapted to many areas of the world, and can flourish in sterile fields; buckwheat plants are therefore widely used for phytoremediation. The aim of this study was to evaluate the phytoremediation capacity of the high-yielding plant buckwheat (Fagopyrum esculentum) in soils contaminated with Cd and Zn. The soils received different doses of Cd (0, 12.5, 25, 50 and 100 mg Cd kg−1 soil in the form of 3CdSO4·8H2O) and Zn (0, 10 and 30 mg Zn kg−1 soil in the form of ZnSO4·7H2O) and were incubated for about 60 days. Buckwheat seeds were then sown and the plants grown for three months under greenhouse conditions. The test plants were irrigated with pure water after planting. Buckwheat seeds (Gunes and Aktas varieties) were obtained from Bahri Dagdas International Agricultural Research.
After harvest, Cd and Zn concentrations of plant biomass and grain, yield, and translocation factors (TFs) for Cd and Zn were determined. Cadmium accumulation in biomass and grain increased significantly in a dose-dependent manner. Long-term field trials are required to further investigate the potential of buckwheat to reclaim such soils, but this could be undertaken in conjunction with actual remediation schemes. However, the differences in element accumulation among the genotypes were affected more by the properties of the genotypes than by the soil properties; the Gunes genotype accumulated more lead than the Aktas genotype.
Keywords: buckwheat, cadmium, phytoremediation, zinc
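The translocation factor mentioned above is conventionally computed as the ratio of the element concentration in above-ground tissue to that in the roots. A minimal sketch with illustrative concentrations (the study's own values are not reproduced here):

```python
def translocation_factor(shoot_conc, root_conc):
    """TF = element concentration in above-ground tissue (e.g. grain or
    biomass) divided by the concentration in roots, in the same units;
    TF > 1 indicates efficient root-to-shoot transfer."""
    if root_conc <= 0:
        raise ValueError("root concentration must be positive")
    return shoot_conc / root_conc

# Illustrative values (mg kg^-1 dry weight, not the study's data):
tf_cd = translocation_factor(12.0, 8.0)   # Cd: shoot 12, root 8
tf_zn = translocation_factor(40.0, 50.0)  # Zn: shoot 40, root 50
```

A TF computed per genotype and per dose is what allows the genotype comparison reported above.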
Procedia PDF Downloads 417
1702 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images
Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi
Abstract:
Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits; with the busyness of modern life, the consumption of fast food is growing, so diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist must know the stage of the tumor. The most common method of staging is the TNM system, in which M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. Clearly, determining all three of these parameters requires an imaging method, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, the use of X-rays means the patient's absorbed dose, and hence cancer risk, is high, while access to PET/CT is limited by its high cost. In this study, we therefore aimed to estimate tumor size and the extent of spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the pre-processing step, histogram equalization was applied to improve image quality and the images were resized to a uniform size. Two expert radiologists, each with more than 21 years of experience with colon cancer cases, segmented the images and extracted the tumor regions. The next steps were feature extraction from the segmented images and classification of the data into three classes: T0N0, T3N1 and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of these tasks, i.e., feature extraction and classification. This network has 13 convolution layers for feature extraction and three fully connected layers with the softmax activation function for classification.
To validate the proposed method, 10-fold cross-validation was used: the data were randomly divided into three parts, training (70% of the data), validation (10% of the data) and the rest for testing. This was repeated 10 times; each time, the accuracy, sensitivity and specificity of the model were calculated, and the average over the ten repetitions is reported as the result. The accuracy, specificity and sensitivity of the proposed method on the test dataset were 89.09%, 95.8% and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the absence of predefined hand-crafted imaging features for determining the stage of colon cancer patients are among the advantages of this study.
Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis
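The reported metrics can be derived from a multi-class confusion matrix. Below is a minimal, generic sketch (not the authors' code) of accuracy plus macro-averaged sensitivity and specificity, with classes assumed to be encoded as 0 = T0N0, 1 = T3N1, 2 = T3N2.

```python
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes):
    """Accuracy plus macro-averaged sensitivity (recall) and specificity
    derived from a multi-class confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # rows: true class, cols: predicted
    total = cm.sum()
    accuracy = np.trace(cm) / total
    sens, spec = [], []
    for c in range(n_classes):             # one-vs-rest for each class
        tp = cm[c, c]
        fn = cm[c].sum() - tp
        fp = cm[:, c].sum() - tp
        tn = total - tp - fn - fp
        sens.append(tp / (tp + fn) if tp + fn else 0.0)
        spec.append(tn / (tn + fp) if tn + fp else 0.0)
    return accuracy, float(np.mean(sens)), float(np.mean(spec))
```

In a cross-validation scheme such as the one described above, this function would be called once per fold and the three values averaged over folds.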
Procedia PDF Downloads 59
1701 Formulation and Test of a Model to Explain the Complexity of Road Accident Events in South Africa
Authors: Dimakatso Machetele, Kowiyou Yessoufou
Abstract:
Whilst several studies have indicated that road accident events may be more complex than previously thought, we have a limited scientific understanding of this complexity in South Africa. The present project proposes and tests a more comprehensive metamodel that integrates multiple causal relationships among variables previously linked to road accidents. This was done by fitting a structural equation model (SEM) to data collected from various sources. The study also fitted a GARCH (generalized autoregressive conditional heteroskedasticity) model to forecast road accidents in the country. The analysis shows that the number of road accidents has been increasing since 1935. The road fatality rate follows a quadratic trend, y = -0.0114x² + 1.2378x - 2.2627 (R² = 0.76), where y is the death rate and x is the year. This trend results in an average death rate of 23.14 deaths per 100,000 people. Furthermore, the analysis shows that the number of crashes is significantly explained by the total number of vehicles (P < 0.001), the number of registered vehicles (P < 0.001), the number of unregistered vehicles (P = 0.003) and the population of the country (P < 0.001). Contrary to expectation, the number of driver licenses issued and the total distance traveled by vehicles do not correlate significantly with the number of crashes (P > 0.05). Furthermore, the analysis reveals that the number of casualties is linked significantly to the number of registered vehicles (P < 0.001) and the total distance traveled by vehicles (P = 0.03). As for fatal crashes, the total number of vehicles (P < 0.001), the numbers of registered (P < 0.001) and unregistered vehicles (P < 0.001), the population of the country (P < 0.001) and the total distance traveled by vehicles (P < 0.001) all correlate significantly with their number.
However, the number of casualties and, again, the number of driver licenses do not seem to determine the number of fatal crashes (P > 0.05). Finally, the number of crashes is predicted to remain roughly constant over time at 617,253 accidents for the next 10 years, with the worst-case scenario suggesting that this number may reach 1,896,667. The number of casualties is also predicted to remain roughly constant at 93,531, although it may reach 661,531 in the worst-case scenario. Although the number of fatal crashes may decrease over time, it is forecast to reach 11,241 within the next 10 years, with the worst-case estimate at 19,034 for the same period. The number of fatalities is likewise predicted to remain roughly constant at 14,739 but may reach 172,784 in the worst-case scenario. Overall, the present study reveals the complexity of road accidents and allows us to propose several recommendations aimed at reducing the trends in road accidents, casualties, fatal crashes and deaths in South Africa.
Keywords: road accidents, South Africa, statistical modelling, trends
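The quadratic trend fit reported above can be sketched with an ordinary least-squares polynomial fit. The check below regenerates noise-free data from the reported coefficients (using hypothetical year indices), so it recovers R² = 1 here rather than the paper's 0.76, which reflects the scatter in the real data.

```python
import numpy as np

def quadratic_trend(x, y):
    """Least-squares fit of y = a*x^2 + b*x + c; returns the coefficient
    array [a, b, c] and the coefficient of determination R^2."""
    coeffs = np.polyfit(x, y, deg=2)
    y_hat = np.polyval(coeffs, x)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot

# Regenerate noise-free data from the reported curve (hypothetical year
# indices 1..80) and confirm the fit recovers the coefficients.
x = np.arange(1, 81, dtype=float)
y = -0.0114 * x**2 + 1.2378 * x - 2.2627
coeffs, r2 = quadratic_trend(x, y)
```

With real yearly fatality-rate data in place of `y`, the same function would return the fitted coefficients together with an R² below 1.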
Procedia PDF Downloads 161
1700 Effects of Vertimax Training on Agility, Quickness and Acceleration
Authors: Dede Basturk, Metin Kaya, Halil Taskin, Nurtekin Erkmen
Abstract:
In total, 29 recreationally active students of the Selçuk University Physical Training and Sports School participated voluntarily in this study, which was carried out to examine the effects of Vertimax training on agility, quickness and acceleration. Three groups took part: a Vertimax training group (N=10), an ordinary training group (N=10) and a control group (N=9). Measurements were carried out in the performance laboratory of the Selçuk University Physical Training and Sports School. A quickness and agility training program was followed 3 days a week (Monday, Wednesday, Friday) for 8 weeks. Subjects in the Vertimax training group followed the program with the Vertimax device and subjects in the ordinary training group followed it without the device; the recreationally active control group did not participate in any program. Measurements were applied as pre-test and post-test. Four gate photocells were used for timing, with distances measured in metres; a single gate photocell and honi were used for the agility test. Measurements started with 15 minutes of warm-up, after which acceleration, quickness and agility tests were applied. Three measurements were made for each subject at 3-minute rest intervals, and the best of the three was recorded. The 5 m quickness pre-test value of the Vertimax training group was 1.11±0.06 s and the post-test value 1.06±0.08 s (P<0.05). The 5 m quickness pre-test value of the ordinary training group was 1.11±0.06 s and the post-test value 1.07±0.07 s (P<0.05). The 5 m quickness pre-test value of the control group was 1.13±0.08 s and the post-test value 1.10±0.07 s (P>0.05).
Upon examination of the 10 m acceleration values before and after training, the pre-test value of the Vertimax training group was 1.82±0.07 s and the post-test value 1.76±0.83 s (P>0.05). The 10 m acceleration pre-test value of the ordinary training group was 1.83±0.05 s and the post-test value 1.78±0.08 s (P>0.05); for the control group, the values were 1.87±0.11 s and 1.83±0.09 s (P>0.05). For 15 m acceleration, the pre-test value of the Vertimax training group was 2.52±0.10 s and the post-test value 2.46±0.11 s (P>0.05); for the ordinary training group, 2.52±0.05 s and 2.48±0.06 s (P>0.05); and for the control group, 2.55±0.11 s and 2.54±0.08 s (P>0.05). For agility performance, the pre-test value of the Vertimax training group was 9.50±0.47 s and the post-test value 9.66±0.47 s (P>0.05); for the ordinary training group, 9.99±0.05 s and 9.86±0.40 s (P>0.05); and for the control group, 9.74±0.45 s and 9.92±0.49 s (P>0.05). Consequently, it was observed that quickness and acceleration improved significantly following the 8-week Vertimax training program, whereas agility did not.
The training practices used in this study may therefore be useful in situations that require sudden movements and the attainment of maximum speed in a short time. However, they do not appear to contribute to the development of movements requiring sudden changes of direction. Varying the practices used in Vertimax training may improve training productivity and innovation.
Keywords: vertimax, training, quickness, agility, acceleration
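Pre-test/post-test comparisons like those above are typically evaluated with a paired t-test. A minimal sketch of the test statistic follows (the abstract does not state which test was used, and the sprint times below are illustrative, not the study's raw data):

```python
import math

def paired_t_statistic(pre, post):
    """t statistic for a paired pre/post comparison:
    t = mean(d) / (sd(d) / sqrt(n)), with d = pre - post."""
    if len(pre) != len(post) or len(pre) < 2:
        raise ValueError("need two equal-length samples with n >= 2")
    d = [a - b for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Illustrative 5 m sprint times (s):
pre_times = [1.11, 1.12, 1.00, 1.15]
post_times = [1.01, 1.02, 0.95, 1.05]
t_stat = paired_t_statistic(pre_times, post_times)
```

The statistic would then be compared against the two-sided critical value of the t distribution with n − 1 degrees of freedom to obtain P-values such as those reported above.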
Procedia PDF Downloads 496
1699 Social Skills as a Significant Aspect of a Successful Start of Compulsory Education
Authors: Eva Šmelová, Alena Berčíková
Abstract:
The issue of school maturity and a child's readiness for a successful start to compulsory education is a long-monitored area, especially in the context of education and psychology. In the context of the curricular reform in the Czech Republic, the issue has recently gained importance. Analyses of research in this area suggest the lack of a broad overview of indicators of the current level of children's school maturity and school readiness; instead, various studies address partial issues. Between 2009 and 2013, a research study was performed at the Faculty of Education, Palacký University Olomouc (Czech Republic) focusing on children's maturity and readiness for compulsory education. In that study, social skills were of marginal interest; the main focus was on the mental area. The present study follows directly from this previous research, and its objective is to identify the level of school maturity and school readiness in selected characteristics of social skills during the process of adaptation after enrolment in compulsory education. In this context, the following research question has been formulated: which social skills are weakened during adaptation to the school environment? The method applied was observation, for which the authors developed a research tool: a record sheet with 11 items, the social skills that a child should have by the end of preschool education. The items were assessed by first-grade teachers at the beginning of the school year; the degree of achievement and the intensity of the skills were assessed for each child using an assessment scale. The authors monitored a total of three independent variables (gender, postponement of school attendance, participation in inclusive education), whose effects were examined through 11 dependent variables representing the results achieved in the selected social skills.
Statistical data processing was assisted by the Computer Centre of Palacký University Olomouc. Statistical calculations were performed using SPSS v. 12.0 for Windows and STATISTICA (StatSoft STATISTICA CR, Cz; a software system for data analysis). The research sample comprised 115 children. In their paper, the authors present the results of the research and point to possible areas of further investigation. They also highlight possible risks associated with weakened social skills.
Keywords: compulsory education, curricular reform, educational diagnostics, pupil, school curriculum, school maturity, school readiness, social skills
Procedia PDF Downloads 251
1698 The Importance of Anthropometric Indices for Assessing the Physical Development and Physical Fitness of Young Athletes
Authors: Akbarova Gulnozakhon
Abstract:
Relevance. Physical exercise can prolong the function of the growth zones of long tubular bones, delaying the fusion of the epiphyses and diaphyses and thus increasing body height. At the same time, intensive strength exercise can accelerate the ossification of bone growth zones and slow their growth in length. The influence of physical exercise on the process of biological maturation has been noted: gymnastics, which requires intense speed and strength loads, delays puberty. On the other hand, the relatively slow puberty of gymnasts has been linked to the selection of girls with a particular somatotype for this sport. It has been found that the later onset of menstruation in female athletes does not have a negative effect on the maturation process or on fertility (the ability to procreate), and observations have been made about the normalizing influence of sport on the puberty of girls. The purpose of the study. Our goal is to study the effect of physical activity of varying intensity on the formation of secondary sexual characteristics and the hormonal status of girls in adolescence. No biological process of an organism is in a stationary state; each fluctuates with a certain frequency. By duration these include, for example, circadian cycles and infradian cycles, a typical example of the latter being the menstrual cycle. Materials, methods and results. Menstrual function disorders in athletes were detected using a questionnaire containing several sections and sub-sections recording personal data, anthropometric indicators (taking into account anthropometric indices) and information about the menstrual cycle. Of 135 female athletes aged 13 to 16 years engaged in various sports, menstrual function disorders (primary or secondary amenorrhea, irregular menstrual cycle) were noted in 86.7% of gymnasts and in 57.1% of swimmers. The general condition also changes during the menstrual cycle.
In a large percentage of cases, athletes report increased irritability in the premenstrual (45%) and menstrual (36%) phases, and during these phases girls note increased fatigue in 46.5% and 58% of cases, respectively. In girls, secondary sexual characteristics continue to form during puberty, and the clearest indicator of the onset of puberty is the age of the first menstruation (menarche). Conclusions. 1. Physical exercise has a positive effect on all major systems of the body and thus promotes health. 2. Alongside its beneficial effect on health, physical exercise can be harmful if the requirements of the sport are not observed.
Keywords: girls' health, anthropometric, physical development, reproductive health
Procedia PDF Downloads 102