Search results for: ISD tool
411 Predicting Susceptibility to Coronary Artery Disease using Single Nucleotide Polymorphisms with a Large-Scale Data Extraction from PubMed and Validation in an Asian Population Subset
Authors: K. H. Reeta, Bhavana Prasher, Mitali Mukerji, Dhwani Dholakia, Sangeeta Khanna, Archana Vats, Shivam Pandey, Sandeep Seth, Subir Kumar Maulik
Abstract:
Introduction: Research has demonstrated a connection between coronary artery disease (CAD) and genetics. We performed deep literature mining, using both bioinformatics and manual curation, to identify susceptibility polymorphisms in coronary artery disease, and then sought to validate these findings in an Asian population. Methodology: In the first phase, we used an automated pipeline that organizes and presents structured information on SNPs, populations, and diseases. The information was obtained by applying Natural Language Processing (NLP) techniques to approximately 28 million PubMed abstracts. To accomplish this, we utilized Python scripts to extract and curate disease-related data, filter out false positives, and categorize the data into 24 hierarchical groups using Named Entity Recognition (NER) algorithms. From this search, a total of 466 unique PubMed Identifiers (PMIDs) and 694 Single Nucleotide Polymorphisms (SNPs) related to CAD were identified. To refine the selection, a thorough manual examination of all the studies was carried out: SNPs that demonstrated susceptibility to CAD and exhibited a positive Odds Ratio (OR) were selected, yielding a final pool of 324 SNPs. The next phase involved validating the identified SNPs in DNA samples from 96 CAD patients and 37 healthy controls from an Indian population using a Global Screening Array. Results: Of the 324 SNPs, only 108 were detected, and of these, 4 showed a significant difference in minor allele frequency between cases and controls: rs187238 of the IL-18 gene, rs731236 of the VDR gene, rs11556218 of the IL16 gene, and rs5882 of the CETP gene. Prior studies have linked these SNPs to pathways such as endothelial damage, vitamin D receptor (VDR) polymorphism susceptibility, and reduction of HDL-cholesterol levels, all of which can lead to the development of CAD.
Among these, only rs731236 had previously been studied in an Indian population, and only in the context of diabetes and vitamin D deficiency. This is the first report of these SNPs being associated with CAD in an Indian population. Conclusion: This pool of 324 SNPs is a unique resource that can help uncover risk associations in CAD. Here, we validated it in an Indian population; further validation in different populations may offer valuable insights, contribute to the development of a screening tool, and help enable primary prevention strategies targeted at vulnerable populations.
Keywords: coronary artery disease, single nucleotide polymorphism, susceptible SNP, bioinformatics
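As a minimal illustration of the pattern-matching stage of a pipeline like the one described above (the study's actual NER models and curation rules are not given in the abstract, so this is only a hedged sketch), dbSNP identifiers follow the rs<digits> convention and can be pulled from abstract text with a regular expression; the sample text below is invented:

```python
import re

# dbSNP reference SNP IDs follow the pattern "rs" + digits (e.g. rs731236).
RSID_PATTERN = re.compile(r"\brs\d+\b")

def extract_rsids(abstract_text: str) -> list[str]:
    """Return the unique rsIDs mentioned in a text, in order of first appearance."""
    seen, out = set(), []
    for rsid in RSID_PATTERN.findall(abstract_text):
        if rsid not in seen:
            seen.add(rsid)
            out.append(rsid)
    return out

# Hypothetical abstract snippet, for illustration only.
sample = ("Polymorphisms rs187238 (IL-18) and rs731236 (VDR) were associated "
          "with CAD; rs187238 showed the strongest effect.")
```

A real pipeline would follow this step with NER-based disease tagging and manual curation, as the abstract describes.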
Procedia PDF Downloads 76
410 C-Spine Imaging in a Non-trauma Centre: Compliance with NEXUS Criteria Audit
Authors: Andrew White, Abigail Lowe, Kory Watkins, Hamed Akhlaghi, Nicole Winter
Abstract:
The timing and appropriateness of diagnostic imaging are critical to the evaluation and management of traumatic injuries. Within the subclass of trauma patients, the prevalence of c-spine injury is less than 4%. However, the incidence of delayed diagnosis within this cohort has been documented as up to 20%, with inadequate radiological examination the most commonly cited issue. To assess those in whom c-spine injury cannot be fully excluded on clinical examination alone, and who therefore should undergo diagnostic imaging, a set of criteria is used to provide clinical guidance. The NEXUS (National Emergency X-Radiography Utilisation Study) criteria are a validated clinical decision-making tool used to facilitate selective c-spine radiography. The criteria allow clinicians to determine whether cervical spine imaging can be safely avoided in appropriate patients. The NEXUS criteria are widely used within the Emergency Department setting, given their ease of use and relatively straightforward application, and they appear in the Victorian State Trauma System's guidelines. This audit used retrospective data collection to examine the concordance of c-spine imaging in trauma patients with the NEXUS criteria and to assess compliance with state guidance on diagnostic imaging in trauma. Of the 183 patients who presented with trauma to the head, neck, or face (244 were excluded due to incorrect triage), 98 did not undergo imaging of the c-spine. Of those 98, 44% fulfilled at least one of the NEXUS criteria, meaning the c-spine could not be clinically cleared under the current guidelines. The criterion most often met was intoxication, comprising 42% (18 of 43), with midline spinal tenderness (or absent documentation of it) the second most common at 23% (10 of 43). Intoxication being the most frequently met criterion is significant but not unexpected, given the cohort of patients seen at St Vincent's and within many emergency departments in general.
Given that intoxicated patients will always meet at least one NEXUS criterion, an element of clinical judgement is likely needed, or concurrent use of the Canadian C-Spine Rule, to exclude the need for imaging. Where midline tenderness was the met criterion, it was often in the context of poor or absent documentation, emphasizing the importance of clear and accurate assessments. A distracting injury was identified in 7 of the 43 patients; however, only one of these patients exhibited a thoracic injury (a T11 compression fracture), with the remainder comprising injuries to the extremities. Some studies suggest that c-spine imaging may not be required in the evaluable blunt trauma patient, despite distracting injuries, in any body region that does not involve the upper chest. This emphasizes the need for standardised definitions of distracting injury, at least at a departmental or regional level. The data highlight the currently poor application of the NEXUS guidelines, with likely common themes throughout emergency departments, underscoring the need for further education on implementation and potential refinement or clarification of the criteria. Of note, there appeared to be no significant difference between levels of clinician experience in inappropriately clearing the c-spine against the guidelines.
Keywords: imaging, guidelines, emergency medicine, audit
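As a sketch of how the NEXUS rule operates as a decision tool (the field names here are illustrative, not the study's data model), a patient may be cleared clinically only when all five criteria are absent; imaging is indicated if any one is present:

```python
from dataclasses import dataclass

@dataclass
class NexusAssessment:
    # The five NEXUS criteria; True means the criterion is present.
    midline_tenderness: bool
    intoxication: bool
    altered_alertness: bool
    focal_neuro_deficit: bool
    distracting_injury: bool

def imaging_indicated(a: NexusAssessment) -> bool:
    """C-spine imaging is indicated if ANY criterion is met; clinical
    clearance is permissible only when all five are absent."""
    return any([a.midline_tenderness, a.intoxication, a.altered_alertness,
                a.focal_neuro_deficit, a.distracting_injury])
```

This structure makes the audit's key finding concrete: an intoxicated patient always returns "imaging indicated", which is why the authors suggest supplementary clinical judgement or the Canadian C-Spine Rule for that group.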
Procedia PDF Downloads 72
409 The Role of Anti-corruption Clauses in the Fight Against Corruption in Petroleum Sector
Authors: Azar Mahmoudi
Abstract:
Despite the rise of global anti-corruption movements and the strong emergence of international and national anti-corruption laws, corrupt practices remain prevalent in most places, and countries still struggle to translate these laws into practice. Moreover, in most countries, political and economic elites oppose anti-corruption reforms. In such a situation, the role of external actors, such as other states, international organizations, and transnational actors, becomes essential. Among them, Transnational Corporations [TNCs] can develop their own regime-like framework to govern their internal activities and, through this, contribute to the regimes established by state actors to solve transnational issues. Among various regimes, TNCs may choose to comply with the transnational anti-corruption legal regime to avoid the cost of non-compliance with anti-corruption laws. As a result, they decide to strengthen their anti-corruption compliance as they expand into new overseas markets. Such a decision extends anti-corruption standards among their employees and third-party agents and within their projects across countries. To better address the challenges posed by corruption, TNCs have adopted a comprehensive anti-corruption toolkit. Among the various instruments, anti-corruption clauses have become one of the most widely used anti-corruption tools in international commercial agreements. Anti-corruption clauses, acting as a due diligence tool, can protect TNCs against the engagement of third-party agents in corrupt practices and further promote anti-corruption standards among businesses operating across countries. An anti-corruption clause allows parties to create a contractual commitment to exclude corrupt practices during the term of their agreement, including all levels of negotiation and implementation.
Such a clause offers companies a mechanism to reduce the risk of potential corruption in their dealings with third parties while avoiding civil and administrative penalties. There have been few attempts to examine the role of anti-corruption clauses in the fight against corruption; this paper therefore aims to fill that gap by examining anti-corruption clauses in a specific sector where corrupt practices are widespread and endemic, i.e., the petroleum industry. This paper argues that anti-corruption clauses are a positive step toward ensuring that the petroleum industry operates in an ethical and transparent manner, helping to reduce the risk of corruption and promote integrity in the sector. Contractual anti-corruption clauses vary in the types of commitment they impose, so parties have a wide range of options when choosing clauses to incorporate into their contracts. This paper proposes a categorization of anti-corruption clauses in the petroleum sector. In particular, it examines the anti-corruption clauses incorporated in transnational hydrocarbon contracts published by the Resource Contract Portal, an online repository of extractive contracts. The paper then offers a quantitative assessment of anti-corruption clauses according to the type of contract, the date of conclusion, and the geographical distribution.
Keywords: anti-corruption, oil and gas, transnational corporations, due diligence, contractual clauses, hydrocarbon, petroleum sector
Procedia PDF Downloads 132
408 The Active Social Live of #Lovewins: Understanding the Discourse of Homosexual Love and Rights in Thailand
Authors: Tinnaphop Sinsomboonthong
Abstract:
The hashtag #LoveWins has been widely used to celebrate the victory of the LGBTQ movement since June 2015, when the US Supreme Court recognized the right to same-sex marriage. Nowadays, the hashtag is commonly used among active social media users in many countries, including Thailand. Amidst the political conflict between advocates of the junta-backed same-sex marriage legislation, known as 'Thailand's Civil Partnership Draft Bills,' and its detractors, the hashtag became crucial during Thailand's 2019 national election season and shortly afterward, as it was a central part of a political campaign to rebrand many political parties' image, create an LGBT-friendly atmosphere, and neutralize the bi-polarized politics of the law. The use of the hashtag is therefore not just online entertainment but a politico-discursive tool used by many actors for many purposes. Behind the confrontation between supporters and opponents of the law, the hashtag is used by both sides to highlight the Western-centric normativity of homosexual love, closely associated with Eurocentric modernity and heteronormativity. As an online ethnographic study, this paper analyzes how #LoveWins was used among Thai social media users from late 2018 to mid-2019 and how it was signified during the Draft Bills period and the 2019 national election. Preliminary surveys of Twitter data were conducted in December 2018 and, more intensively, in January 2019. The data survey was then officially conducted twice, in February and April 2019, while data collection took place during May and June 2019. Only public posts on Twitter that include the hashtag #LoveWins, or any hashtags quoting 'love' and 'wins,' were targeted. The use of the hashtag can be categorized into three levels: banal decoration, celebration of homosexual love, and colonial discourse on homosexual love.
For the third type of use in particular, discourse analysis reveals that this hashtag is closely associated with the discourse of development and modernity, as most of the descriptive posts express aspirations to become more 'developed and modernized,' like many Western countries and Taiwan, the LGBT capital of Asia. Thus, calls for the 'right to homosexual love' and the 'right to same-sex marriage' in Thailand are shaped and formulated within the discursive linkage between modernity, development, and love. The use of #LoveWins can also be considered a de-queering of love, as only particular types of gender identity, sexual orientation, and relationship that reflect Eurocentric modernity and heteronormativity are accepted and advocated. For this reason, more inclusive queer loves should be supported rather than a merely essentialist-traditionalist homosexual love. Homonormativity must be deconstructed, and love must no longer be reserved for only one particular type of relationship standardized from or by the West; it must become more inclusive.
Keywords: #LoveWins, homosexual love, LGBT rights, same-sex marriage
Procedia PDF Downloads 140
407 Using Balanced Scorecard Performance Metrics in Gauging the Delivery of Stakeholder Value in Higher Education: the Assimilation of Industry Certifications within a Business Program Curriculum
Authors: Thomas J. Bell III
Abstract:
This paper explores the value of assimilating certification training within a traditional course curriculum. This innovative approach is believed to increase stakeholder value within the Computer Information Systems program at Texas Wesleyan University. Stakeholder value derives from the increased job marketability and critical thinking skills that create employment-ready graduates. This paper views value as first developing the capability to earn an industry-recognized certification, which gives the student greater job placement compatibility while exercising critical thinking skills within a liberal arts business program. Graduates with industry-based credentials are often given preference in the hiring process, particularly in the information technology sector, and without a pioneering curriculum that better prepares students for an ever-changing employment market, a program's educational value is open to question. Since certifications are trending in the hiring process, academic programs should explore the viability of incorporating certification training into teaching pedagogy and course curricula. This study examines the use of the balanced scorecard across four performance dimensions (financial, customer, internal process, and innovation) to measure the stakeholder value of certification training within a traditional course curriculum. The balanced scorecard, as a strategic management tool, may provide insight for prioritizing resources and making the decisions needed to achieve various curriculum objectives and long-term value while meeting the needs of multiple stakeholders, such as students, universities, faculty, and administrators.
The research methodology will consist of quantitative analysis that includes (1) surveying over one hundred students in the CIS program to learn what factors contributed to their certification exam success or failure, (2) interviewing representatives from the Texas Workforce Commission to identify employment needs and trends in the North Texas (Dallas/Fort Worth) area, (3) reviewing notable Workforce Innovation and Opportunity Act publications on training trends across several local business sectors, and (4) analyzing control variables to determine whether a correlation exists between industry alignment and job placement. These findings may provide helpful insight into impactful pedagogical techniques and curricula that contribute positively to certification credentialing success. And should these industry-certified students land industry-related jobs that align with their certification credentials, stakeholder value has arguably been realized.
Keywords: certification exam teaching pedagogy, exam preparation, testing techniques, exam study tips, passing certification exams, embedding industry certification and curriculum alignment, balanced scorecard performance evaluation
Procedia PDF Downloads 108
406 Synthesis and Characterization of pH-Sensitive Graphene Quantum Dot-Loaded Metal-Organic Frameworks for Targeted Drug Delivery and Fluorescent Imaging
Authors: Sayed Maeen Badshah, Kuen-Song Lin, Abrar Hussain, Jamshid Hussain
Abstract:
Liver cancer is a significant global health issue, ranking fifth in incidence and second in mortality. Effective therapeutic strategies are urgently needed to combat this disease, particularly in regions with high prevalence. This study focuses on developing and characterizing fluorescent organometallic frameworks as distinct drug delivery carriers with potential applications in both the treatment and biological imaging of liver cancer. The work introduces two distinct organometallic frameworks: the cake-shaped GQD@NH₂-MIL-125 and the cross-shaped M8U6/FM8U6. The GQD@NH₂-MIL-125 framework is particularly noteworthy for its high fluorescence, making it an effective tool for biological imaging. X-ray diffraction (XRD) analysis revealed specific diffraction peaks at 6.81° (011), 9.76° (002), and 11.69° (121), with an additional significant peak at 26° (2θ) corresponding to the carbon material. Morphological analysis using Field Emission Scanning Electron Microscopy (FE-SEM) and Transmission Electron Microscopy (TEM) showed a front particle size of 680 nm and a side particle size of 55±5 nm. High-resolution TEM (HR-TEM) images confirmed the successful attachment of graphene quantum dots (GQDs) onto the NH₂-MIL-125 framework. Fourier-Transform Infrared (FT-IR) spectroscopy identified crucial functional groups within the GQD@NH₂-MIL-125 structure, including O-Ti-O metal bonds in the 500 to 700 cm⁻¹ range and N-H and C-N bonds at 1,646 cm⁻¹ and 1,164 cm⁻¹, respectively. BET isotherm analysis revealed a specific surface area of 338.1 m²/g and an average pore size of 46.86 nm. The framework also demonstrated UV-active properties, as identified by UV-visible spectra, and its photoluminescence (PL) spectra showed an emission peak around 430 nm when excited at 350 nm, indicating its potential as a fluorescent drug delivery carrier.
In parallel, the cross-shaped M8U6/FM8U6 frameworks were synthesized and characterized using X-ray diffraction, which identified distinct peaks at 2θ = 7.4 (111), 8.5 (200), 9.2 (002), 10.8 (002), 12.1 (220), 16.7 (103), and 17.1 (400). FE-SEM, HR-TEM, and TEM analyses revealed particle sizes of 350±50 nm for M8U6 and 200±50 nm for FM8U6. These frameworks, synthesized from terephthalic acid (H₂BDC), displayed notable vibrational bonds, such as C=O at 1,650 cm⁻¹, Fe-O in MIL-88 at 520 cm⁻¹, and Zr-O in UiO-66 at 482 cm⁻¹. BET analysis showed specific surface areas of 740.1 m²/g with a pore size of 22.92 nm for M8U6 and 493.9 m²/g with a pore size of 35.44 nm for FM8U6. Extended X-ray Absorption Fine Structure (EXAFS) spectra confirmed the stability of Ti-O bonds in the frameworks, with bond lengths of 2.026 Å for MIL-125, 1.962 Å for NH₂-MIL-125, and 1.817 Å for GQD@NH₂-MIL-125. These findings highlight the potential of these organometallic frameworks for enhanced liver cancer therapy through precise drug delivery and imaging, representing a significant advance in biomedical applications of nanomaterials.
Keywords: liver cancer cells, metal-organic frameworks, Doxorubicin (DOX), drug release
Procedia PDF Downloads 15
405 Microbial Contamination of Cell Phones of Health Care Workers: Case Study in Mampong Municipal Government Hospital, Ghana
Authors: Francis Gyapong, Denis Yar
Abstract:
The cell phone has become an indispensable tool in hospital settings. Cell phones are used in hospitals without restriction, regardless of their unknown microbial load. However, the indiscriminate use of mobile devices, especially at health facilities, can act as a vehicle for transmitting pathogenic bacteria and other microorganisms. These potential pathogens become exogenous sources of infection for patients and a health hazard for staff and their family members, a growing problem in many health care institutions. Innovations in mobile communication have led to better patient care in diabetes and asthma and to increased vaccine uptake via SMS. Notwithstanding, cell phones can be a significant potential source of nosocomial infections, and many studies have reported heavy microbial contamination of cell phones among healthcare workers and communities. However, few such studies have been reported in our region. This study assessed microbial contamination of cell phones of health care workers (HCWs) at the Mampong Municipal Government Hospital (MMGH), Ghana. A cross-sectional design was used to characterize bacterial microflora on the cell phones of HCWs at the MMGH. A total of thirty-five (35) swab samples from cell phones of HCWs at the Laboratory, Dental Unit, Children's Ward, Theatre, and Male Ward were randomly collected for laboratory examination. A suspension of each swab sample was streaked on blood and MacConkey agar and incubated at 37 °C for 48 hours. Bacterial isolates were identified using appropriate laboratory and biochemical tests. The Kirby-Bauer disc diffusion method was used to determine the antimicrobial sensitivity of the isolates. Data analysis was performed using SPSS version 16. All mobile phones sampled were contaminated with one or more bacterial isolates.
Cell phones from the Male Ward, Dental Unit, Laboratory, Theatre, and Children's Ward had at least three different bacterial isolates: 85.7%, 71.4%, and 57.1% respectively, and 28.6% for both the Theatre and Children's Ward. Bacterial contaminants identified were Staphylococcus epidermidis (37%), Staphylococcus aureus (26%), E. coli (20%), Bacillus spp. (11%), and Klebsiella spp. (6%). Except for the Children's Ward, E. coli was isolated at all study sites and was predominant (42.9%) at the Dental Unit, while Klebsiella spp. (28.6%) was isolated only at the Children's Ward. Antibiotic sensitivity testing of Staphylococcus aureus indicated that isolates were highly sensitive to cephalexin (89%), tetracycline (80%), gentamycin (75%), lincomycin (70%), and ciprofloxacin (67%), and highly resistant to ampicillin (75%). Some of the bacteria isolated are potential pathogens, and their presence on the cell phones of HCWs could lead to transmission to patients and their families. Hence, strict hand washing before and after every contact with a patient or phone should be enforced to reduce the risk of nosocomial infections.
Keywords: mobile phones, bacterial contamination, patients, MMGH
Procedia PDF Downloads 104
404 Tailorability of Poly(Aspartic Acid)/BSA Complex by Self-Assembling in Aqueous Solutions
Authors: Loredana E. Nita, Aurica P. Chiriac, Elena Stoleru, Alina Diaconu, Tudorachi Nita
Abstract:
Self-assembly is an attractive route to forming new and complex structures between macromolecular compounds for specific applications. In this context, intramolecular and intermolecular bonds play a key role during self-assembly in the preparation of carrier systems for bioactive substances. Polyelectrolyte complexes (PECs) are formed through electrostatic interactions; although these are significantly weaker than covalent linkages, the complexes are sufficiently stable owing to the association processes. The relative ease of PEC formation makes them a versatile tool for preparing various materials, with properties that can be tuned by adjusting several parameters, such as the chemical composition and structure of the polyelectrolytes, the pH and ionic strength of the solutions, the temperature, and post-treatment procedures. For example, protein-polyelectrolyte complexes (PPCs) play an important role in various chemical and biological processes, such as protein separation, enzyme stabilization, and polymeric drug delivery systems. The present investigation focuses on the formation of a PPC between a synthetic polypeptide (poly(aspartic acid), PAS) and a natural protein (bovine serum albumin, BSA). PPCs obtained from PAS and BSA in different ratios were investigated by corroborating various characterization techniques (spectroscopy, microscopy, thermogravimetric analysis, DLS, and zeta potential determination), with measurements performed under static and/or dynamic conditions. The static contact angle of the sample films was also determined in order to evaluate changes in the surface free energy of the prepared PPCs in relation to the complexes' composition.
The evolution of the hydrodynamic diameter and zeta potential of the PPC, recorded in situ, confirms conformational changes in both partners, a 1/1 ratio between protein and polyelectrolyte being beneficial for the preparation of a stable PPC. The study also evidenced the dependence of PPC formation on the preparation temperature: at low temperatures, the PPC forms with a compact structure, small dimensions, and a hydrodynamic diameter close to that of BSA. The behavior of the prepared PPCs under thermal treatment is in agreement with the composition of the complexes. The contact angle measurements indicate an increase in the cohesion of the PPC films, which is higher than that of BSA films. The new PPC films also show higher hydrophobicity, denoting good adhesion of red blood cells onto the surface of the PAS/BSA interpenetrated systems. The SEM investigation likewise evidenced the specific internal structure of the PPC, comprising phases with different sizes and shapes depending on the composition of the interpolymer mixture.
Keywords: polyelectrolyte-protein complex, bovine serum albumin, poly(aspartic acid), self-assembly
Procedia PDF Downloads 246
403 Analysis on the Converged Method of Korean Scientific and Mathematical Fields and Liberal Arts Programme: Focusing on the Intervention Patterns in Liberal Arts
Authors: Jinhui Bak, Bumjin Kim
Abstract:
The purpose of this study is to analyze how the scientific and mathematical fields (STEM) and the liberal arts (A) work together in the STEAM program. In future STEAM programs, the humanities should act not merely as a 'tool' for science, technology, and mathematics but as 'core' content with equivalent status. STEAM was first introduced to the Republic of Korea in 2011, when the Ministry of Education emphasized fostering creative convergence talent. Many programs have since been developed under the name STEAM, but with the majority focusing on technology education, the arts and humanities are treated as secondary. As a result, arts is most likely to be accepted as an option that can be excluded by the teachers who run a STEAM program. If what we ultimately pursue through STEAM education is fostering STEAM literacy, we should no longer reduce arts to a tooling area for STEM. Based on this premise, this study analyzed over 160 middle and high school STEAM programs produced and distributed by the Ministry of Education and the Korea Science and Technology Foundation from 2012 to 2017. The framework of analysis referenced two criteria presented in related prior studies: normative convergence and technological convergence. In addition, we divided the arts into fine arts and liberal arts, focused on the Korean language course within the liberal arts, and analyzed which curriculum standards were selected and how the Korean language department participated in teaching and learning. To ensure the reliability of the analysis, the two researchers cross-checked their individual results and adopted them only when consistent. We also conducted a reliability check with three middle and high school teachers involved in the STEAM education program.
Analyzing 10 programs selected at random from those examined, Cronbach's α = .853 indicated a reliable level of agreement. The results of this study are summarized as follows. First, the convergence ratio of the liberal arts was lowest in moral education, at 14.58%. Second, normative convergence was 28.19%, lower than technological convergence. Third, the Korean language achievement criteria selected for the programs were limited to functional areas such as listening, speaking, reading, and writing. This means that the Korean language component converges only as a tool for communicating opinions or promoting scientific products. We intend to compare these results with STEAM programs in the United States and elsewhere to explore which elements or key concepts should be required in the achievement criteria for the Korean language curriculum. This is meaningful in that it provides basic data for fusing the humanities field (A), including Korean, as an 'equivalent qualification' with science (S), technological engineering (TE), and mathematics (M).
Keywords: Korean STEAM Programme, liberal arts, STEAM curriculum, STEAM Literacy, STEM
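The inter-rater reliability figure quoted above is Cronbach's α. As a reminder of how the statistic is computed, a minimal sketch follows using the standard formula α = k/(k−1) · (1 − Σ var(itemᵢ)/var(total)); the rating data below are invented for illustration:

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha; `items` holds one list of scores per item,
    all rated over the same set of cases."""
    k = len(items)
    item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per case
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical ratings: three items scored over four programs.
ratings = [[2, 4, 6, 8], [1, 2, 3, 4], [3, 6, 9, 12]]
```

With these perfectly correlated hypothetical items, the function returns α ≈ 0.917, i.e. high internal consistency, comparable in interpretation to the α = .853 reported in the study.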
Procedia PDF Downloads 158
402 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data
Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda
Abstract:
Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated for certain types of cancer, such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia, and Waldenström's macroglobulinaemia (WM). Cardiac failure refers to the inability of the heart muscle to pump adequate blood to the body's organs. There are multiple types of cardiac failure, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case review: 212 global ICSRs were retrieved for the drug/adverse-reaction combination as of July 2020. The reviewers selected and assessed causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs); a value of 1.0 represents the highest score for the best-documented ICSRs. Among the reviewed cases, more than half support the association (four probable and 15 possible cases). Data mining: The disproportionality between the observed and the expected reporting rate for a drug/adverse-reaction pair is estimated using the information component (IC), a tool developed by the WHO-UMC to measure the reporting ratio.
A positive IC reflects a higher statistical association, while negative values indicate a lower statistical association, with the null value equal to zero. The result (IC = 1.5) revealed a positive statistical association for the drug/ADR combination, meaning that 'ibrutinib' with 'cardiac failure' has been observed more often than expected when compared with other medications in the WHO database. Conclusion: Health regulators and health care professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and monitoring for any signs or symptoms in treated patients is essential. The weighted cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure.
Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection
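The IC statistic above compares observed with expected reporting counts on a log2 scale. A minimal sketch of a common point-estimate form follows; the exact shrinkage used in VigiBase differs in detail, and the counts in the example are hypothetical, not the abstract's data:

```python
from math import log2

def information_component(n_drug_adr: int, n_drug: int, n_adr: int, n_total: int) -> float:
    """Point estimate of the IC disproportionality measure:
    IC = log2((O + 0.5) / (E + 0.5)), with E = n_drug * n_adr / n_total.
    The +0.5 terms shrink the estimate toward zero for small counts."""
    expected = n_drug * n_adr / n_total
    return log2((n_drug_adr + 0.5) / (expected + 0.5))

# Hypothetical counts: 25 reports of the drug/reaction pair,
# 1,000 reports for the drug, 500 for the reaction, 100,000 reports in total.
ic = information_component(25, 1_000, 500, 100_000)
```

A positive result, as here, means the pair is reported more often than expected under independence, which is the sense in which IC = 1.5 supports the ibrutinib/cardiac-failure signal.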
Procedia PDF Downloads 130
401 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes
Authors: Nadarajah I. Ramesh
Abstract:
Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent development on this topic and presents the results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator, together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite state irreducible Markov process X(t). 
Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting the models. One advantage of this approach was that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when they were fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model
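As a concrete illustration of the DSPP construction described above, the sketch below simulates bucket-tip arrival times whose Poisson rate switches with an unobserved two-state Markov process. The rates and switching intensities are invented for illustration, and the maximum likelihood fitting used in the paper is not shown.

```python
import random

def simulate_dspp(arrival_rates, switch_rates, t_end, seed=1):
    """Two-state Markov-modulated Poisson process: while the hidden
    chain is in state i, arrivals occur at rate arrival_rates[i];
    the chain leaves state i at rate switch_rates[i]. Rates must be > 0."""
    rng = random.Random(seed)
    t, state, arrivals = 0.0, 0, []
    while t < t_end:
        # Length of the current sojourn in `state` (truncated at t_end).
        seg_end = min(t + rng.expovariate(switch_rates[state]), t_end)
        s = t
        while True:
            s += rng.expovariate(arrival_rates[state])  # next candidate arrival
            if s >= seg_end:
                break
            arrivals.append(s)
        t, state = seg_end, 1 - state  # switch to the other hidden state
    return arrivals

# Hypothetical "dry-ish" vs "showery" states; time unit is hours.
tips = simulate_dspp(arrival_rates=(0.2, 12.0), switch_rates=(0.5, 2.0),
                     t_end=240.0)
```

The clustered bursts produced while the chain sits in the high-rate state are what let this class of models mimic the fine-scale intermittency of rainfall.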
Procedia PDF Downloads 279
400 Rotary Machine Sealing Oscillation Frequencies and Phase Shift Analysis
Authors: Liliia N. Butymova, Vladimir Ya Modorskii
Abstract:
To ensure the efficient operation of a gas transmittal GCU, leakages through the labyrinth packings (LP) should be minimized. Leakages can be minimized by decreasing the LP gap, which in turn depends on thermal processes and possible rotor vibrations and is designed to ensure the absence of mechanical contact. Vibration mitigation makes it possible to minimize the LP gap, so it is worthwhile to study the influence of processes in the dynamic gas-structure system on LP vibrations. This paper considers the influence of rotor vibrations on LP gas dynamics, and the influence of the latter on the rotor structure, within a unidirectionally coupled dynamic fluid-structure interaction (FSI) problem. The dependence of the nonstationary parameters of the gas-dynamic process in the LP on rotor vibrations was studied under various gas speeds and pressures, shaft rotation speeds, vibration amplitudes, and working media. The multi-processor ANSYS CFX package was chosen as the numerical computation tool, and the problem was solved on the PNRPU high-capacity computer complex. The vibrations of the deformed shaft are replaced with a rigid profile that moves up and down in the fixed annulus according to a prescribed harmonic law. The nonstationary gas-dynamic problem is then solved to determine the time dependence of the total gas-dynamic force acting on the shaft. Increasing the pressure from 0.1 to 10 MPa increases the amplitude and frequency of the gas-dynamic force oscillations, while the phase shift angle between the gas-dynamic force oscillations and those of the shaft displacement decreases from 3π/4 to π/2. The damping constant reaches its maximum at a gap pressure of 1 MPa. Increasing the shaft oscillation frequency from 50 to 150 Hz at P = 10 MPa increases the gas-dynamic force oscillation amplitude; the damping constant is greatest at 50 Hz, equalling 1.012. Increasing the shaft vibration amplitude from 20 to 80 µm at P = 10 MPa raises the gas-dynamic force amplitude by up to 20 times, and the damping constant increases from 0.092 to 0.251.
Calculations for various working substances (methane, perfect gas, air at 25 °C) show that the minimum persistent oscillation amplitude of the gas-dynamic force at P = 0.1 MPa is observed in methane and the maximum in air; the frequency remains almost unchanged, and the phase shift in air changes from 3π/4 to π/2. At P = 10 MPa, the same calculations show the maximum gas-dynamic force oscillation amplitude in methane and the minimum in air, with air demonstrating surging. Increasing the leakage speed through the LP from 0 to 20 m/s at P = 0.1 MPa decreases the gas-dynamic force oscillation amplitude by three orders of magnitude, while the oscillation frequency and the phase shift double and then stabilize. Increasing the leakage speed from 0 to 20 m/s in the LP at P = 1 MPa decreases the gas-dynamic force oscillation amplitude by almost four orders of magnitude; the phase shift angle increases from π/72 to π/2, and the oscillations become persistent. Flow rate thus proved to strongly influence the pressure oscillation amplitude and the phase shift angle. The influence of the working medium depends on the operating conditions: as pressure grows, vibrations are most affected in methane (of the working substances considered), and as pressure decreases, in air at 25 °C.
Keywords: aeroelasticity, labyrinth packings, oscillation phase shift, vibration
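To make the phase-shift quantities above concrete, the sketch below estimates the phase angle between two sampled harmonic signals by projecting each onto sine and cosine at the known frequency (a single-bin discrete Fourier transform). This is a generic illustration, not the ANSYS CFX post-processing used in the study; the signal names and sampling values are invented.

```python
import math

def phase_shift(sig_a, sig_b, freq_hz, dt):
    """Phase of sig_b relative to sig_a at frequency freq_hz (radians,
    positive when sig_b leads). Signals must be sampled every dt seconds
    over a whole number of periods for the projection to be exact."""
    def phase(sig):
        re = sum(v * math.cos(2 * math.pi * freq_hz * i * dt)
                 for i, v in enumerate(sig))
        im = sum(v * math.sin(2 * math.pi * freq_hz * i * dt)
                 for i, v in enumerate(sig))
        return math.atan2(im, re)
    return phase(sig_a) - phase(sig_b)

# Example: 50 Hz shaft displacement and a force signal leading it by pi/2.
dt, f, n = 1.0 / 5000.0, 50.0, 1000   # 100 samples per period, 10 periods
shaft = [math.sin(2 * math.pi * f * i * dt) for i in range(n)]
force = [math.sin(2 * math.pi * f * i * dt + math.pi / 2) for i in range(n)]
shift = phase_shift(shaft, force, f, dt)
```

The same projection applied to simulated force and displacement histories is one simple way to extract phase angles such as the 3π/4-to-π/2 transitions reported above.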
Procedia PDF Downloads 296
399 Ibadan-Nigeria Citizenship Behavior Scale: Development and Validation
Authors: Benjamin O. Ehigie, Aderemi Alarape, Nyitor Shenge, Sylvester A. Okhakhume, Timileyin Fashola, Fiyinfunjah Dosumu
Abstract:
Organisational citizenship behaviour (OCB) is a construct in industrial and organisational behaviour that describes a person's voluntary commitment within an organisation, outside the scope of his or her contractual tasks. To attain organisational effectiveness, the human factor of production is indispensable; hence the importance of employee behaviour. While the concept of organisational citizenship behaviour is mostly discussed in the context of the workplace, it is reasoned that the idea could equally be applied to national commitment. Many developing countries in Africa, including Nigeria, suffer economic hardship today not necessarily because of poor resources but because of bad management of those resources. The managers of their economies are not committed to the tenets of economic growth but are engrossed in fraud, corruption, bribery, and other economic vices. It is against this backdrop that the Ibadan-Nigeria Citizenship Behaviour (I-NCB) Scale was developed and validated. The study adopted a cross-sectional (online) survey research design, using 2,404 postgraduate students at the premier university of the country, 99.2% of whom were Nigerians and 0.8% non-Nigerians. The gender composition was 1,439 (60%) males and 965 (40%) females; 1,201 (50%) were employed and 1,203 (50%) unemployed. Of the employed, 74.2% were in public paid employment, 19.5% in the private sector, and 6.3% self-employed. Through a literature review, 78 items were generated. Using 10 lecturers and 21 students, content and face validity, respectively, were established. Data collected were subjected to reliability and factor-analytic statistics at the p < .05 level of significance. Results of the content and face validation, at an 80% level of item acceptance, reduced the pool to 60 items; this was further reduced to 50 after item-total correlation using an r = .30 criterion.
Divergent validity of r = -.28 and convergent validity of r = .44 were obtained by correlating the I-NCB scale with a standardized counterproductive work behaviour (CWB) scale and an organisational citizenship behaviour (OCB) scale among the workers. The reliability coefficients obtained were a Cronbach alpha of internal consistency of α = 0.941 and a split-half reliability of r = 0.728. Factor analysis of the I-NCB scale with principal components and varimax rotation yielded five factors when eigenvalues above 1 were extracted. The factors accounting for the largest proportions of the total variance were named Altruistic, Attachment, Affective, Civic responsibility, and Allegiance. Although there is a vast literature on citizenship behaviour in organisations, no standardized tool exists to measure the citizenship behaviour of a country; the Ibadan-Nigeria Citizenship Behaviour (I-NCB) scale was consequently developed. The scale could be used to select personnel into political positions and senior administrative positions among career workers in Nigeria, with the aim of determining national commitment to service.
Keywords: counterproductive work behaviour, CWB, Nigeria Citizenship Behaviour, organisational citizenship behaviour, OCB, Ibadan
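The internal-consistency figure quoted above can be illustrated with a minimal computation. The sketch below implements the classic Cronbach's alpha formula on an invented toy response matrix; it is not the authors' analysis pipeline or their data.

```python
def cronbach_alpha(responses):
    """responses: list of rows, one per respondent, each a list of item
    scores. alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(responses[0])

    def variance(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[j] for row in responses]) for j in range(k)]
    totals = [sum(row) for row in responses]
    return k / (k - 1) * (1 - sum(item_vars) / variance(totals))

# Toy 4-respondent, 3-item Likert data (hypothetical).
alpha = cronbach_alpha([[4, 5, 4], [2, 2, 3], [5, 4, 5], [1, 2, 1]])
```

Values near 1 (such as the α = 0.941 reported above) indicate that the items covary strongly and measure a common construct.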
Procedia PDF Downloads 253
398 Using Teachers' Perceptions of Science Outreach Activities to Design an 'Optimum' Model of Science Outreach
Authors: Victoria Brennan, Andrea Mallaburn, Linda Seton
Abstract:
Science outreach programmes connect school pupils with external agencies to provide activities and experiences that enhance their exposure to science. It can be argued that these programmes not only aim to support teachers with curriculum engagement and promote scientific literacy but also provide pivotal opportunities to spark scientific interest in students. In turn, a further objective of these programmes is to increase awareness of career opportunities within this field. Although outreach work is often described as a fun and satisfying venture, many researchers caution against assuming that these processes succeed in increasing post-16 engagement in science. When researching the impact of outreach programmes, it is often student feedback on the activities, or enrolment numbers in particular post-16 science courses, that is generated and analysed. Although this is informative, the longevity of a programme’s impact could be better informed by teachers’ perceptions, the evidence for which is far more limited in the literature. In addition, there are strong suggestions that teachers can have an indirect impact on a student’s own self-concept. These themes shape the focus and importance of this ongoing research project, which proceeds from the rationale that teachers are under-used resources when it comes to the design of science outreach programmes. The end result of the research will therefore be the presentation of an ‘optimum’ model of outreach, which should be of interest to wider stakeholders such as universities and private or government organisations that design science outreach programmes in the hope of recruiting future scientists. During phase one, questionnaires (n = 52) and interviews (n = 8) generated both quantitative and qualitative data.
These have been analysed using the Wilcoxon non-parametric test, to compare teachers’ perceptions of science outreach interventions, and thematic analysis for the open-ended questions. Both research activities provide an opportunity to obtain a cross-section of teacher opinions of science outreach across all educational levels. An early draft of the ‘optimum’ model of science outreach delivery was therefore generated using both the wealth of literature and the primary data. The final (ongoing) phase aims to refine this model using teacher focus groups, which provide constructive feedback on the proposed model. The analysis uses principles of modified grounded theory to ensure that the focus group data further strengthen the model. The research thus takes a pragmatist approach, drawing on the strengths of the different paradigms encountered so that the data collected provide the most suitable information for creating an improved model of sustainable outreach. The results discussed will focus on this ‘optimum’ model and on teachers’ perceptions of the benefits and drawbacks of engaging with science outreach work. Although the model is still a work in progress, it provides insight both into how teachers feel outreach delivery can be a sustainable intervention tool within the classroom and into what providers of such programmes should consider when designing science outreach activities.
Keywords: educational partnerships, science education, science outreach, teachers
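The Wilcoxon analysis mentioned above can be sketched in pure Python: the signed-rank statistic compares paired ratings by ranking the absolute differences between them. This is a simplified illustration with invented ratings, not the study's data or the exact software it used.

```python
def wilcoxon_w(before, after):
    """Wilcoxon signed-rank statistic W: rank the non-zero absolute
    differences (average ranks for ties), then return the smaller of
    the positive- and negative-rank sums."""
    diffs = [a - b for a, b in zip(before, after) if a != b]
    abs_sorted = sorted(abs(d) for d in diffs)

    def avg_rank(value):  # average of the 1-based positions sharing this value
        positions = [i + 1 for i, v in enumerate(abs_sorted) if v == value]
        return sum(positions) / len(positions)

    w_pos = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    w_neg = sum(avg_rank(abs(d)) for d in diffs if d < 0)
    return min(w_pos, w_neg)
```

A small W indicates a consistent shift between the paired ratings; significance is then read from critical-value tables or a normal approximation.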
Procedia PDF Downloads 134
397 The Shape of the Sculptor: Exploring Psychologist’s Perceptions of a Model of Parenting Ability to Guide Intervention in Child Custody Evaluations in South Africa
Authors: Anthony R. Townsend, Robyn L. Fasser
Abstract:
This research project provides an interpretative phenomenological analysis of a proposed conceptual model of parenting ability that has been designed to offer recommendations to guide intervention in child custody evaluations in South Africa. A recent review of the literature on child custody evaluations reveals that while there have been significant and valuable shifts in the capacity of the legal system aided by mental health professionals in understanding children and family dynamics, there remains a conceptual gap regarding the nature of parenting ability. With a view to addressing this paucity of a theoretical basis for considering parenting ability, this research project reviews a dimensional model for the assessment of parenting ability by conceiving parenting ability as a combination of good parenting and parental fitness. This model serves as a conceptual framework to guide child-custody evaluation and refine intervention in such cases to better meet the best interests of the child in a manner that bridges the professional gap between parties, legal entities, and mental health professionals. Using a model of good parenting as a point of theoretical departure, this model incorporates both intra-psychic and interpersonal attributes and behaviours of parents to form an impression of parenting ability and identify areas for potential enhancement. 
This research, therefore, hopes to achieve the following: (1) to provide nuanced descriptions of parents’ parenting ability; (2) to describe parents’ parenting potential; (3) to provide a parenting assessment tool for investigators in forensic family matters that will enable more useful recommendations and interventions; (4) to develop a language of consensus for investigators, attorneys, judges and parents, in forensic family matters, as to what comprises parenting ability and how this can be assessed; and (5) that all of the aforementioned will serve to advance the best interests of the children involved in such litigious matters. The evaluative promise and post-assessment prospects of this model are illustrated through three interlinking data sets: (1) the results of interviews with South African psychologists about the model, (2) retrospective analysis of care and contact evaluation reports using the model to determine if different conclusions or more specific recommendations are generated with its use, and (3) the results of an interview with a psychologist who piloted this model by using it in a care and contact evaluation.
Keywords: alienation, attachment, best interests of the child, care and contact evaluation, children’s act (38 of 2005), child custody evaluation, civil forensics, gatekeeping, good parenting, good-enough parenting, health professions council of South Africa, family law, forensic mental healthcare practitioners, parental fitness, parenting ability, parent management training, parenting plan, problem-determined system, psychotherapy, support of other child-parent relationship, voice of the child
Procedia PDF Downloads 117
396 ‘Only Amharic or Leave Quick!’: Linguistic Genocide in the Western Tigray Region of Ethiopia
Authors: Merih Welay Welesilassie
Abstract:
Language is a potent instrument that not only serves the purpose of communication but also plays a pivotal role in shaping our cultural practices and identities. The right to choose one's language is a fundamental human right that helps to safeguard the integrity of both personal and communal identities. Language holds immense significance in Ethiopia, a nation with a diverse linguistic landscape in which language extends beyond mere communication to delineate administrative boundaries. Consequently, depriving Ethiopians of their linguistic rights represents a multifaceted punishment, more complex than food embargoes. In the aftermath of the civil war that shook Ethiopia in November 2020, displacing millions and resulting in the loss of hundreds of thousands of lives, concerns have been raised about the preservation of the indigenous Tigrayan language and culture. This is particularly true following the annexation of western Tigray into the Amhara region and the implementation of an Amharic-only language and culture education policy. This scholarly inquiry explores the intricacies surrounding the Amhara regional state's prohibition of Tigrayans' indigenous language and culture and the subsequent adoption of a monolingual and monocultural Amhara language and culture in western Tigray. The study adopts the linguistic genocide conceptual framework as an analytical tool to gain deeper insight into the factors that contributed to and facilitated this significant linguistic and cultural shift. The research was conducted by interviewing ten teachers selected through snowball sampling. Additionally, document analysis was performed to support the findings. The findings revealed that the push for linguistic and cultural assimilation was driven by various political and economic factors and by the desire to promote a single language and culture policy. This process, often referred to as ‘Amharanization,’ aimed to homogenize the culture and language of the society.
The Amhara authorities have enacted several measures in pursuit of their objectives, including outlawing the Tigrigna language, punishing those who speak Tigrigna, imposing the Amhara language and culture, mandating relocation, and even committing heinous acts that have inflicted immense physical and emotional suffering upon members of the Tigrayan community. Upon conducting a comprehensive analysis of the contextual factors, actions, intentions, and consequences, it is posited that instances of linguistic genocide may be taking place in the Western Tigray region. The present study sheds light on the severe consequences that could arise from implementing monolingual and monocultural policies in multilingual areas. By thoroughly scrutinizing the implications of such policies, this study provides insightful recommendations and directions for future research in this critical area.
Keywords: linguistic genocide, linguistic human right, mother tongue, Western Tigray
Procedia PDF Downloads 65
395 Effects of a School-based Mindfulness Intervention on Stress Levels and Emotion Regulation of Adolescent Students Enrolled in an Independent School
Authors: Tracie Catlett
Abstract:
Students enrolled in high-achieving schools are under tremendous pressure to perform at high levels inside and outside the classroom. Achievement pressure is a prevalent source of stress for students enrolled in high-achieving schools, and female students, in particular, experience a higher frequency and higher levels of stress compared to their male peers. The practice of mindfulness in a school setting is one tool that has been linked to improved self-regulation of emotions, increased positive emotions, and stress reduction. A mixed-methods randomized pretest-posttest no-treatment control trial evaluated the effects of a six-session mindfulness intervention taught during a regularly scheduled life skills period in an independent day school, one type of high-achieving school. Twenty-nine students in Grades 10 and 11 were randomized by class: Grade 11 students were in the intervention group (n = 14) and Grade 10 students were in the control group (n = 15). Findings from the study produced mixed results. There was no evidence that the mindfulness program reduced participants’ stress levels and negative emotions. In fact, contrary to what was expected, students in the intervention group experienced higher levels of stress and increased negative emotions at posttreatment when compared to pretreatment. Neither the within-group nor the between-groups changes in stress level were statistically significant, p > .05, and the between-groups effect size was small, d = .2. The study found evidence that the mindfulness program may have had a positive impact on students’ ability to regulate their emotions. The within-group comparison and the between-groups comparison at posttreatment found that students in the mindfulness course experienced statistically significant improvement in their ability to regulate their emotions, p = .009 < .05 and p = .034 < .05, respectively.
The between-groups effect size was medium, d = .7, suggesting that the positive differences in emotion regulation difficulties were substantial and have practical implications. The analysis of gender differences, as they relate to stress and emotions, revealed that female students perceive higher levels of stress and report experiencing stress more often than males. There were no gender differences when analyzing sources of stress experienced by the student participants. Both females and males experience regular achievement pressures related to their school performance and worry about their future, college acceptance, grades, and parental expectations. Females reported an increased awareness of their stress and actively engaged in practicing mindfulness to manage their stress. Students in the treatment group expressed that the practice of mindfulness resulted in feelings of relaxation and calmness.
Keywords: achievement pressure, adolescents, emotion regulation, emotions, high-achieving schools, independent schools, mindfulness, negative affect, positive affect, stress
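The effect sizes reported above (d = .2, d = .7) follow Cohen's pooled-standard-deviation formula, sketched below on invented score lists; this is an illustration, not the study's data.

```python
def cohens_d(group_a, group_b):
    """Cohen's d = (mean_a - mean_b) / pooled standard deviation."""
    def mean(xs):
        return sum(xs) / len(xs)

    def variance(xs):  # sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a) +
                  (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5
```

By the usual conventions, |d| around 0.2 is a small effect and around 0.7 a medium-to-large one, which is how the stress and emotion-regulation results above are interpreted.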
Procedia PDF Downloads 63
394 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulating the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. These errors delayed widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially on endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is no surprise given the similarity of those methods' pairwise-likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods.
The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although ETLM had the longest processing time and the least friendly interface of the compared methods. The population estimators performed differently across the datasets; there was consensus between the different estimators for only one dataset. BayesN produced both higher and lower estimates than Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity; heterogeneity here means different capture rates between individuals. In these examples, tolerance for homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. A broader analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems the most appropriate for general use, considering a balance of time, interface, and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
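The way recapture-frequency structure drives a population estimate can be illustrated with a simple frequency-based estimator. The sketch below uses Chao's lower-bound estimator, chosen for brevity; it is neither Capwire nor BayesN, and the capture counts are invented.

```python
from collections import Counter

def chao_estimate(capture_counts):
    """Chao's lower-bound population size from a list giving how many
    times each distinct genotype was captured:
    N_hat = S_obs + f1^2 / (2 * f2), where f1 and f2 are the numbers of
    genotypes seen exactly once and exactly twice."""
    freq = Counter(capture_counts)
    s_obs = len(capture_counts)          # distinct genotypes observed
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:                          # bias-corrected form when no doubletons
        return s_obs + f1 * (f1 - 1) / 2
    return s_obs + f1 * f1 / (2 * f2)

# Six genotypes: three singletons, two doubletons, one seen three times.
estimate = chao_estimate([1, 1, 1, 2, 2, 3])   # 6 + 9/4 = 8.25
```

Many singletons relative to doubletons push the estimate well above the observed count, which is the same sensitivity to capture heterogeneity discussed for BayesN above.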
Procedia PDF Downloads 144
393 The Role of Law in the Transformation of Collective Identities in Nigeria
Authors: Henry Okechukwu Onyeiwu
Abstract:
Nigeria, with its rich tapestry of ethnicities, cultures, and religions, serves as a critical case study in understanding how law influences and shapes collective identities. This abstract delves into the historical context of legal systems in Nigeria, examining the colonial legacies that have influenced contemporary laws and how these laws interact with traditional practices and beliefs. This study examines the critical role of law in shaping and transforming collective identities in Nigeria, a nation characterized by its rich tapestry of ethnicities, cultures, and religions. The legal framework in Nigeria has evolved in response to historical, social, and political dynamics, influencing the way communities perceive themselves and interact with one another. This research highlights the interplay between law and collective identity, exploring how legal instruments, such as constitutions, statutes, and judicial rulings, have contributed to the formation, negotiation, and reformation of group identities over time. Moreover, contemporary legal debates surrounding issues such as citizenship, resource allocation, and communal conflicts further illustrate the law's role in identity formation. The legal recognition of different ethnic groups fosters a sense of belonging and collective identity among these groups, yet it simultaneously raises questions about inclusivity and equality. Laws concerning indigenous rights and affirmative action are essential in this discourse, as they reflect the necessity of balancing majority rule with minority rights—a challenge that Nigeria continues to navigate. By employing a multidisciplinary approach that integrates legal studies, sociology, and anthropology, the study analyses key historical milestones, such as colonial legal legacies, post-independence constitutional developments, and ongoing debates surrounding federalism and ethnic rights. 
It also investigates how laws affect social cohesion and conflict among Nigeria's diverse ethnic groups, as well as the role of law in promoting inclusivity and recognizing minority rights. Case studies illustrate practical examples of legal transformations and their impact on collective identities in various Nigerian contexts, including land rights, religious freedoms, and ethnic representation in government. The findings reveal that while the law has the potential to unify disparate groups under a national identity, it can also exacerbate divisions when applied inequitably or in ways that favour particular groups over others. Ultimately, this study aims to shed light on the dual nature of law as both a tool for transformation and a potential source of conflict in the evolution of collective identities in Nigeria. By understanding these dynamics, policymakers and legal practitioners can develop strategies to foster unity and respect for diversity in a complex societal landscape.
Keywords: law, collective identity, Nigeria, ethnicity, conflict, inclusion, legal framework, transformation
Procedia PDF Downloads 28
392 Gender Differences in the Impact and Subjective Interpretation of Childhood Sexual Abuse Survivors
Authors: T. Borja-Alvarez, V. Jiménez-Borja, M. Jiménez Borja, C. J. Jiménez-Mosquera
Abstract:
Research on child sexual abuse has predominantly focused on female survivors. This has resulted in less research looking at the particular context in which this abuse takes place for boys and the impact this abuse may have on male survivors. The aim of this study is to examine the sex and age of the perpetrators of child sexual abuse and explore gender differences in the impact along with the subjective interpretation that survivors attribute to these experiences. The data for this study was obtained from Ecuadorian university students (M = 230, F = 293) who reported sexual abuse using the ISPCAN Child Abuse Screening Tool Retrospective version (ICAST-R). Participants completed Horowitz's Impact of Event Scale (IES) and were also requested to choose among neutral, positive, and negative adjectives to describe these experiences. The results indicate that in the case of males, perpetrators were both males (adults =27%, peers =20%, relatives =10.3%, cousins =7.4%) and young females (girlfriends or ex-girlfriends =25.6%, neighborhood =20.7%, school =16.7%, cousins =15.3%, strangers =12.8%). In contrast, almost all females reported that adult males were the perpetrators (relatives =29.6%, neighborhood =11.9%, strangers =19.9%, family friends =9.7%). Regarding the impact of these events, significant gender differences emerged. More females (50%) than males (20%) presented symptoms of post-traumatic stress disorder (PTSD). Gender differences also surfaced in the way survivors interpret their experiences. Almost half of the male participants selected the word “consensual” followed by the words “normal”, “helped me to mature”, “shameful”, “confusing”, and “traumatic”. In contrast, almost all females chose the word “non-consensual” followed by the words “shameful”, “traumatic”, “scary”, and “confusing”. 
In conclusion, the findings of this study suggest that young females and adult males were the most common perpetrators of sexually abused boys whereas adult males were the most common perpetrators of sexually abused girls. The impact and the subjective interpretation of these experiences were more negative for girls than for boys. The factors that account for the gender differences in the impact and the interpretation of these experiences need further exploration. It is likely that the cultural expectations of sexual behaviors for boys and girls in Latin American societies may partially explain the differential impact in the way these childhood sexual abuse experiences are interpreted in adulthood. In Ecuador, as is the case in other Latin American countries, the machismo culture not only accepts but encourages early sexual behaviors in boys and negatively judges premature sexual behavior in females. The result of these different sexual expectations may be that sexually abused boys may re-define these experiences as “consensual” and “normal” in adulthood, even though these were not consensual at the time of occurrence. Future studies are needed to more deeply understand the different contexts of sexual abuse for boys and girls in order to analyze the long-term impact of these experiences.
Keywords: abuse, child, gender differences, sexual
Procedia PDF Downloads 104
391 Digital Advance Care Planning and Directives: Early Observations of Adoption Statistics and Responses from an All-Digital Consumer-Driven Approach
Authors: Robert L. Fine, Zhiyong Yang, Christy Spivey, Bonnie Boardman, Maureen Courtney
Abstract:
Importance: Barriers to traditional advance care planning (ACP) and advance directive (AD) creation have limited the promise of ACP/AD for individuals and families, the healthcare team, and society. Reengineering ACP by using a web-based, consumer-driven process has recently been suggested. We report early experience with such a process. Objective: Begin to analyze the potential of the creation and use of ACP/ADs as generated by a consumer-friendly, digital process by 1) assessing the likelihood that consumers would create ACP/ADs without structured intervention by medical or legal professionals, and 2) analyzing the responses to determine if the plans can help doctors better understand a person’s goals, preferences, and priorities for their medical treatments and the naming of healthcare agents. Design: The authors chose 900 users of MyDirectives.com, a digital ACP/AD tool, solely based on their state of residence in order to achieve proportional representation of all 50 states by population size and then reviewed their responses, summarizing these through descriptive statistics including treatment preferences, demographics, and revision of preferences. Setting: General United States population. Participants: The 900 participants had an average age of 50.8 years (SD = 16.6); 84.3% of the men and 91% of the women were in self-reported good health when signing their ADs. Main measures: Preferences regarding the use of life-sustaining treatments, where to spend final days, consulting a supportive and palliative care team, attempted cardiopulmonary resuscitation (CPR), autopsy, and organ and tissue donation. Results: Nearly 85% of respondents prefer cessation of life-sustaining treatments during their final days whenever those may be, 76% prefer to spend their final days at home or in a hospice facility, and 94% wanted their future doctors to consult a supportive and palliative care team. 70% would accept attempted CPR in certain limited circumstances. 
Most respondents would want an autopsy under certain conditions, and 62% would like to donate their organs. Conclusions and relevance: Analysis of early experience with an all-digital web-based ACP/AD platform demonstrates that individuals from a wide range of ages and conditions can engage in an interrogatory process about values, goals, preferences, and priorities for their medical treatments by developing advance directives, and can easily make changes to the AD created. Online creation, storage, and retrieval of advance directives have the potential to remove barriers to ACP/AD and, thus, to further improve patient-centered end-of-life care.
Keywords: advance care plan, advance decisions, advance directives, consumer, digital, end-of-life care, goals, living wills, preferences, universal advance directive, statements
Procedia PDF Downloads 327
390 Analysing the Stability of Electrical Grid for Increased Renewable Energy Penetration by Focussing on Li-Ion Battery Storage Technology
Authors: Hemendra Singh Rathod
Abstract:
Frequency is, among other factors, one of the governing parameters for maintaining electrical grid stability. The quality of an electrical transmission and supply system is mainly described by the stability of the grid frequency. Over the past few decades, energy generation by intermittent sustainable sources like wind and solar has seen a significant increase globally. Consequently, controlling the associated deviations in grid frequency within safe limits has been gaining momentum so that the balance between demand and supply can be maintained. The lithium-ion battery energy storage system (Li-ion BESS) has been a promising technology for tackling the challenges associated with grid instability. BESS is, therefore, an effective response to the ongoing debate over whether it is feasible to have an electrical grid constantly functioning on one hundred percent renewable power in the near future. In recent years, large-scale manufacturing and capital investment in battery production processes have made Li-ion battery systems cost-effective and increasingly efficient. Li-ion systems require very little maintenance, are independent of geographical constraints, and are easily scalable. The paper highlights the use of stationary and moving BESS for rapidly balancing electrical energy and thereby maintaining grid frequency. Moving BESS technology, as implemented in a selected railway network in Germany, is considered here as an exemplary concept for demonstrating the same functionality in the electrical grid system. Further, certain applications of Li-ion batteries, such as self-consumption of wind and solar parks and their ancillary services, storage of wind and solar energy during low demand, black start, island operation, and residential home storage, offer a solution to effectively integrate the renewables and support Europe’s future smart grid. 
The EMT software tool DIgSILENT PowerFactory has been utilised to model an electrical transmission system with 100% renewable energy penetration. The stability of such a transmission system has been evaluated together with BESS within a defined frequency band. The transmission system operators (TSOs) have the superordinate responsibility for system stability and must also coordinate with the other European transmission system operators. Frequency control is implemented by TSOs by maintaining a balance between electricity generation and consumption. Li-ion battery systems are here seen as flexible, controllable loads and flexible, controllable generation for balancing energy pools. Thus, using Li-ion battery storage, frequency-dependent load shedding, i.e., automatic gradual disconnection of loads from the grid, and frequency-dependent electricity generation, i.e., automatic gradual connection of BESS to the grid, serve as security measures to maintain grid stability in any scenario. The paper emphasizes the use of stationary and moving Li-ion battery storage for meeting the demands of maintaining grid frequency and stability for near-future operations.
Keywords: frequency control, grid stability, li-ion battery storage, smart grid
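The frequency-dependent connection of BESS described in the abstract can be sketched as a simple droop-style control law. This is a minimal illustration, not the study's DIgSILENT PowerFactory model: the 50 Hz nominal frequency, deadband, full-response deviation, and rated power below are all assumed values.

```python
def bess_response(frequency_hz, nominal_hz=50.0, deadband_hz=0.02,
                  full_power_dev_hz=0.2, rated_mw=10.0):
    """Return a BESS power setpoint in MW for a measured grid frequency.

    Positive = discharge (frequency sagging, support the grid);
    negative = charge (frequency high, absorb surplus generation).
    The response is zero inside the deadband, ramps linearly with the
    deviation, and saturates at the rated power (illustrative values).
    """
    dev = nominal_hz - frequency_hz  # positive when frequency is low
    if abs(dev) <= deadband_hz:
        return 0.0
    # Scale linearly between deadband and full-deviation threshold, then clamp.
    frac = min((abs(dev) - deadband_hz) / (full_power_dev_hz - deadband_hz), 1.0)
    return rated_mw * frac * (1 if dev > 0 else -1)
```

For example, a sag to 49.8 Hz would command full discharge, while 50.3 Hz would command full charging; small deviations inside the deadband leave the battery idle.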
Procedia PDF Downloads 152
389 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets using an OpenScience Energy System Optimization Model
Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
Hydrogen is expected to become an undisputed player in the ecological transition throughout the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called “hard-to-abate” sectors, including the industrial production of iron and steel, glass, refineries and heavy-duty transport. In this regard, Italy, in the framework of decarbonization plans for the whole European Union, has been considering a wider use of hydrogen to provide an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options concerning the pathway to be followed in the development of the future Italian energy system in order to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to carry out a techno-economic analysis of the required asset alternatives to be used in that perspective. To accomplish this objective, the energy system optimization model TEMOA-Italy is used, based on the open-source platform TEMOA and developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios to be compared with a business-as-usual one, which considers the application of current policies in a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040 and 2050, without any other specific target. The second one (inspired by the national objectives on the development of the sector) promotes the deployment of the hydrogen value chain. 
These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry and synfuels production. Furthermore, the decarbonization scenario in which hydrogen production is not imposed makes use of this energy vector as well, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, which appears to rely on Carbon Capture and Utilization technologies as a fundamental element of its accomplishment. In line with the European Commission's open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.
Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA
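The kind of least-cost trade-off that an energy system optimization model such as TEMOA resolves (at vastly greater scale and detail) can be illustrated with a deliberately tiny two-technology dispatch under a CO2 cap. All costs, the emission factor, and the technology pair below are hypothetical, not figures from the TEMOA-Italy scenarios.

```python
def dispatch(demand_mwh, co2_cap_t, gas_cost=60.0, h2_cost=120.0, gas_ef=0.4):
    """Least-cost dispatch of cheap-but-emitting gas vs. costlier zero-emission
    green hydrogen, subject to an emissions cap (all parameters illustrative).

    gas_ef is tCO2 emitted per MWh of gas generation.
    Returns (gas_mwh, h2_mwh, total_cost).
    """
    # Gas is cheaper, so use as much as the CO2 cap (or demand) allows ...
    gas = min(demand_mwh, co2_cap_t / gas_ef)
    # ... and hydrogen covers the decarbonized remainder.
    h2 = demand_mwh - gas
    return gas, h2, gas * gas_cost + h2 * h2_cost
```

Tightening the cap shifts generation from gas to hydrogen and raises total cost, which is the basic mechanism by which imposed decarbonization targets pull hydrogen into the optimal solution.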
Procedia PDF Downloads 75
388 Predicting OpenStreetMap Coverage by Means of Remote Sensing: The Case of Haiti
Authors: Ran Goldblatt, Nicholas Jones, Jennifer Mannix, Brad Bottoms
Abstract:
Accurate, complete, and up-to-date geospatial information is the foundation of successful disaster management. When the 2010 Haiti Earthquake struck, accurate and timely information on the distribution of critical infrastructure was essential for the disaster response community for effective search and rescue operations. Existing geospatial datasets such as Google Maps did not have comprehensive coverage of these features. In the days following the earthquake, many organizations released high-resolution satellite imagery, catalyzing a worldwide effort to map Haiti and support the recovery operations. Among these organizations, OpenStreetMap (OSM), a collaborative project to create a free editable map of the world, used the imagery to support volunteers in digitizing roads, buildings, and other features, creating the most detailed map of Haiti in existence in just a few weeks. However, large portions of the island are still not fully covered by OSM. There is an increasing need for a tool to automatically identify which areas in Haiti, as well as in other countries vulnerable to disasters, are not fully mapped. The objective of this project is to leverage different types of remote sensing measurements, together with machine learning approaches, in order to identify geographical areas where OSM coverage of building footprints is incomplete. Several remote sensing measures and derived products were assessed as potential predictors of OSM building footprint coverage, including: intensity of light emitted at night (based on VIIRS measurements), spectral indices derived from the Sentinel-2 satellite (normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), soil-adjusted vegetation index (SAVI), urban index (UI)), surface texture (based on Sentinel-1 SAR measurements), elevation and slope. 
Additional remote sensing derived products, such as Hansen Global Forest Change, DLR's Global Urban Footprint (GUF), and World Settlement Footprint (WSF), were also evaluated as predictors, as well as the OSM street and road network (including junctions). A supervised classification with a random forest classifier predicted 89% of the variation of OSM building footprint area in a given cell. These predictions allowed for the identification of cells that are predicted to be covered but are actually not yet mapped. With these results, this methodology could be adapted to any location to assist with preparing for future disastrous events and ensure that essential geospatial information is available to support the response and recovery efforts during and following major disasters.
Keywords: disaster management, Haiti, machine learning, OpenStreetMap, remote sensing
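The supervised workflow above (predictor values per grid cell in, predicted OSM building footprint area out) can be sketched with a deliberately miniature random forest of depth-1 regression trees in pure Python. The two features and all values are hypothetical, and real work would use a library such as scikit-learn rather than this toy ensemble.

```python
import random

def stump_fit(X, y):
    """Fit a depth-1 regression tree: the feature/threshold split that
    minimizes the summed squared error of the two leaf means."""
    best = None
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X))[:-1]:
            left = [yi for row, yi in zip(X, y) if row[j] <= t]
            right = [yi for row, yi in zip(X, y) if row[j] > t]
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, ml, mr)
    return None if best is None else best[1:]

def forest_fit(X, y, n_trees=25, seed=0):
    """Bagged ensemble of stumps: each tree sees a bootstrap resample."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]
        stump = stump_fit([X[i] for i in idx], [y[i] for i in idx])
        if stump is not None:  # skip degenerate (constant) resamples
            trees.append(stump)
    return trees

def forest_predict(trees, x):
    """Average the leaf predictions of all stumps."""
    return sum((ml if x[j] <= t else mr) for j, t, ml, mr in trees) / len(trees)

# Hypothetical per-cell predictors: [night-time light intensity, NDBI]
X = [[0.0, 0.05], [1.0, 0.10], [1.5, 0.08], [2.0, 0.12],
     [5.0, 0.60], [5.5, 0.70], [6.0, 0.65], [7.0, 0.80]]
y = [0, 0, 0, 0, 10, 10, 10, 10]  # OSM building footprint area per cell (ha)
trees = forest_fit(X, y)
```

Cells where the forest predicts substantial footprint area but OSM records little would then be flagged as likely under-mapped.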
Procedia PDF Downloads 125
387 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region
Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho
Abstract:
The modeling of areas susceptible to soil loss by hydro-erosive processes consists of a simplified instrument of reality with the purpose of predicting future behaviors from the observation and interaction of a set of geoenvironmental factors. The models of potential areas for soil loss will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The choice of the municipality of Colorado do Oeste in the south of the western Amazon is due to soil degradation caused by anthropogenic activities, such as agriculture, road construction, overgrazing and deforestation, and by its environmental and socioeconomic configuration. Initially, a soil erosion inventory map will be constructed through various field investigations, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. 100 sampling units with the presence of erosion will be selected based on the assumptions indicated in the literature, and, to complement the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that exert, jointly, directly, or indirectly, some influence on the mechanism of occurrence of soil erosion events. The chosen predictors are altitude, declivity, aspect or orientation of the slope, curvature of the slope, composite topographic index, flow power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and ground surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste - Rondônia through the SPSS Statistics 26 software. 
Evaluation of the model will occur through the determination of the values of Cox & Snell's R² and Nagelkerke's R², the Hosmer and Lemeshow test, the log-likelihood value, and the Wald test, in addition to analysis of the confusion matrix, ROC curve, and accumulated gain according to the model specification. The validation of the synthesis map resulting from both models of the potential risk of soil erosion will occur by means of Kappa indices, accuracy, and sensitivity, as well as by field verification of the classes of susceptibility to erosion using drone photogrammetry. Thus, it is expected to obtain the mapping of the following classes of susceptibility to erosion: very low, low, moderate, high, and very high, which may constitute a screening tool to identify areas where more detailed investigations need to be carried out, so that social resources can be applied more efficiently.
Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon
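The binary logistic regression step, eroded versus non-eroded sampling units as a function of geoenvironmental predictors, can be sketched in pure Python. The study itself uses SPSS; the two predictors below (min-max-scaled slope and NDVI) and their values are purely illustrative stand-ins for the thirteen predictors listed above.

```python
import math

def sigmoid(z):
    """Numerically safe logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def fit_logistic(X, y, lr=0.5, epochs=5000):
    """Binary logistic regression fitted by batch gradient descent.
    X: rows of (already scaled) predictor values; y: 1 = erosion, 0 = none."""
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for row, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, row)) + b) - yi
            gw = [gj + err * xj for gj, xj in zip(gw, row)]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def susceptibility(w, b, row):
    """Predicted probability that the unit is susceptible to erosion."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, row)) + b)

# Hypothetical sampling units: [min-max-scaled slope, NDVI]
X = [[0.90, 0.20], [0.70, 0.30], [1.00, 0.10],   # units with erosion
     [0.15, 0.80], [0.25, 0.70], [0.10, 0.90]]   # units without erosion
y = [1, 1, 1, 0, 0, 0]
w, b = fit_logistic(X, y)
```

Mapping the predicted probability over every cell of the study area, and thresholding it into classes, yields exactly the kind of susceptibility map the abstract describes.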
Procedia PDF Downloads 68
386 A Clinical Audit on Screening Women with Subfertility Using Transvaginal Scan and Hysterosalpingo Contrast Sonography
Authors: Aarti M. Shetty, Estela Davoodi, Subrata Gangooly, Anita Rao-Coppisetty
Abstract:
Background: Testing the patency of the Fallopian tubes is among the several protocols for investigating subfertile couples. Both hysterosalpingogram (HSG) and the laparoscopy and dye test have been used as tubal patency tests for several years, with well-known limitations. Hysterosalpingo contrast sonography (HyCoSy) can be used as an alternative tool to HSG to screen the patency of the Fallopian tubes, with the advantages of being non-ionising and of using transvaginal scan to diagnose pelvic pathology. Aim: To determine the indication for and analyse the performance of transvaginal scan and HyCoSy in Broomfield Hospital. Methods: We retrospectively analysed the fertility workup of 282 women who attended the HyCoSy clinic at our institution from January 2015 to June 2016. An audit proforma was designed to aid data collection. Data were collected from patient notes and electronic records, and included patient demographics (age, parity, type of subfertility (primary or secondary), duration of subfertility, past medical history) and baseline investigations (hormone profile and semen analysis). Findings of the transvaginal scan, HyCoSy and laparoscopy were also noted. Results: The most common indication for referral was a primary fertility workup on couples who had failed to conceive despite intercourse for a year; other indications for referral were recurrent miscarriage, history of ectopic pregnancy, post reversal of sterilization (vasectomy and tuboplasty), post gynaecology surgery (loop excision, cone biopsy) and amenorrhea. The basic fertility workup showed that 34% of the men had an abnormal semen analysis. HyCoSy was successfully completed in 270 (95%) women using ExEm foam and transvaginal scan. In these 270 patients, 535 tubes were examined in total; 495/535 (93%) tubes were reported as patent, and 40/535 (7.5%) tubes were reported as blocked. A total of 17 (6.3%) patients required a laparoscopy and dye test after HyCoSy. 
In these 17 patients, 32 tubes were examined under laparoscopy, and 21 tubes had findings similar to HyCoSy, a concordance rate of 65%. In addition, 41 patients had some form of pelvic pathology (endometrial polyp, fibroid, cervical polyp, bicornuate uterus) detected during the transvaginal scan and were referred for corrective surgery after attending the HyCoSy clinic. Conclusion: Our audit shows that HyCoSy and transvaginal scan can be a reliable screening test for low-risk women. Furthermore, HyCoSy has diagnostic accuracy competitive with HSG in identifying tubal patency, with the additional advantage of screening for pelvic pathology. With the addition of 3D scanning, pulsed Doppler and other non-invasive imaging modalities, HyCoSy may potentially replace laparoscopy and chromopertubation in the near future.
Keywords: hysterosalpingo contrast sonography (HyCoSy), transvaginal scan, tubal infertility, tubal patency test
Procedia PDF Downloads 251
385 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification
Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti
Abstract:
Rivers have traditionally been classified, assessed and managed in terms of hydrological, chemical and/or biological criteria. Geomorphological classifications played a secondary role in the past, although proposals like the River Styles Framework, Catchment Baseline Survey or Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system, and understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve its ecological function (hydrologic, sedimentological and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Besides, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is peculiar to the river studied. The variables used in the classification are specific stream power and mean grain size; a discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness. Each site is a control point of high ecological and geomorphological quality, so changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins. 
In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool underlying the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance, whether natural or anthropogenic. Hydrographic mapping is also dynamic because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure makes it possible to monitor the geomorphological quality of the rivers and to detect any alterations. The maps are useful to researchers and managers, especially for conservation work and river restoration.
Keywords: fluvial auto-classification concept, mapping, geomorphology, river
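The per-type discriminant equations can be sketched as follows, using the two variables the study names (specific stream power and mean grain size). The control-point observations and type labels below are invented for illustration, and a shared diagonal covariance with equal priors is assumed, which is a simplification of a full discriminant analysis.

```python
def fit_discriminants(samples):
    """samples: {geomorph_type: [(specific_stream_power, mean_grain_size), ...]}.

    Returns (g, classify): g(k, x) evaluates type k's linear discriminant
    score at observation x; classify(x) returns the highest-scoring type.
    Assumes equal priors and a shared diagonal covariance (simplified LDA).
    """
    n_feat = len(next(iter(samples.values()))[0])
    means = {k: [sum(p[j] for p in pts) / len(pts) for j in range(n_feat)]
             for k, pts in samples.items()}
    n = sum(len(pts) for pts in samples.values())
    # Pooled within-type variance for each variable.
    var = [sum((p[j] - means[k][j]) ** 2
               for k, pts in samples.items() for p in pts) / (n - len(samples))
           for j in range(n_feat)]

    def g(k, x):
        # Linear discriminant score: mu'S^-1 x - 0.5 mu'S^-1 mu (diagonal S).
        return sum(means[k][j] * x[j] / var[j]
                   - 0.5 * means[k][j] ** 2 / var[j] for j in range(n_feat))

    def classify(x):
        return max(samples, key=lambda k: g(k, x))

    return g, classify

# Invented control-point observations for two hypothetical geomorphological types
samples = {
    "high-energy gravel-bed": [(50.0, 40.0), (55.0, 45.0), (60.0, 42.0)],
    "low-energy sand-bed": [(5.0, 2.0), (7.0, 3.0), (6.0, 2.5)],
}
g, classify = fit_discriminants(samples)
```

A control point whose specific stream power or grain size drifts so that it scores higher under a different type's equation would signal exactly the kind of geomorphological disturbance the annual checks are meant to catch.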
Procedia PDF Downloads 367
384 The Digital Divide: Examining the Use and Access to E-Health Based Technologies by Millennials and Older Adults
Authors: Delana Theiventhiran, Wally J. Bartfay
Abstract:
Background and Significance: As the Internet is becoming the epitome of modern communications, there are many pragmatic reasons why the digital divide matters in terms of accessing and using E-health based technologies. With the rise of technology usage globally, those in the older adult generation may not be as familiar and comfortable with technology usage and are thus put at a disadvantage compared to other generations, such as millennials, when examining and using E-health based platforms and technology. Currently, little is known about how older adults and millennials access and use e-health based technologies. Methods: A systematic review of the literature was undertaken employing the following three databases: (i) PubMed, (ii) ERIC, and (iii) CINAHL, employing the search term 'digital divide and generations' to identify potential articles. To extract the required data from the studies, a data abstraction tool was created to obtain the following information: (a) author, (b) year of publication, (c) sample size, (d) country of origin, (e) design/methods, (f) major findings/outcomes obtained. Inclusion criteria were publication dates between Jan 2009 and Aug 2018, written in the English language, target populations of older adults aged 65 and above and millennials, and peer-reviewed quantitative studies only. Major Findings: PubMed provided 505 potential articles, of which 23 met the inclusion criteria. ERIC provided 53 potential articles, of which none met the criteria following data extraction. CINAHL provided 14 potential articles, of which eight met the criteria following data extraction. Conclusion: Practically speaking, identifying how newer E-health based technologies can be integrated into society and why there is a gap with digital technology will help reduce the impact on generations and individuals who are not as familiar with technology and Internet usage. 
The largest concern of all is how to prepare older adults for new and emerging E-health technologies. Currently, there is a dearth of literature in this area because it is a newer area of research and little is known about it. The benefits and consequences of technology being integrated into daily living are being investigated as a newer area of research. Several of the articles (N=11) indicated that age is one of the larger factors contributing to the digital divide. Similarly, many of the examined articles (N=5) identified privacy concerns as one of the main deterrents of technology usage for elderly individuals aged 65 and above. The older adult generation feels that privacy is one of the major concerns, especially with regard to how data is collected, used and possibly sold to third-party groups by various websites. Additionally, access to technology, the Internet, and infrastructure also plays a large part in the way that individuals are able to receive and use information. Lastly, a change in the way that healthcare is currently used, received and distributed would also help ensure that no generation is left behind in a technologically advanced society.
Keywords: digital divide, e-health, millennials, older adults
Procedia PDF Downloads 172
383 Revolution through Rhythm: Anti Caste and Subaltern Dimensions in Indian Rap
Authors: Nithin Raj Adithyan
Abstract:
Rap music is a popular genre that features strong beats and rhythmic words. It was created by American disc jockeys and urban Black performers in the late 1970s. Additionally, it carries on West African oral traditions that were introduced to the Americas by Africans who were held as slaves and have impacted the narrative and rhythmic components of rap. Initially introduced in India in the late 1990s as mere entertainment, rap lacked the politicized undertones that it developed in the United States. However, recent years have witnessed a transformation, with Indian rap evolving into a vital tool for marginalized communities—particularly Dalits, Muslims, and tribal groups—to voice grievances against historical injustices, systemic discrimination, and caste-based oppression. This paper examines Indian rap's evolution into a potent medium for subaltern resistance, drawing from its origins in the black ghettos of America, where rap emerged as social commentary and an anti-racial political voice. Historically, music has served as an essential medium for subaltern groups in India to assert their identities and reclaim agency. Indian rap, in its current form, amplifies this function by offering a compelling platform to address issues of caste oppression, socio-economic marginalization, and symbolic exclusion. This paper examines how contemporary Indian rappers, often from Dalit and lower-caste backgrounds, leverage their art to confront systemic injustices and amplify voices that have historically been silenced. By analyzing key artists and their lyrics, this paper highlights the ways in which rap critiques the pervasive caste system, challenges social hierarchies, and fosters a sense of identity and solidarity among subaltern groups. This study uses Gayatri Spivak’s concept of “strategic essentialism” to explore how Indian rap fosters shared subaltern identity, uniting voices across regional and cultural divides. 
By situating Indian rap within the global hip-hop movement, the paper highlights how it contributes a unique perspective to global narratives of resilience and resistance, adapting international influences to articulate local struggles. Ultimately, this research highlights Indian rap’s role as a catalyst for change, examining its critique of caste violence, economic marginalization, and social exclusion and demonstrating how it contributes to the anti-caste movement. Through a close reading of this subaltern dimension of rap, the paper illuminates how Indian rap fosters identity, solidarity, and resistance, affirming the genre’s potential as a transformative force within the global legacy of hip-hop as an expression of subaltern agency and social dissent.
Keywords: caste oppression, hip-hop/rap, resistance, subaltern
Procedia PDF Downloads 22
382 A Sustainability Benchmarking Framework Based on the Life Cycle Sustainability Assessment: The Case of the Italian Ceramic District
Authors: A. M. Ferrari, L. Volpi, M. Pini, C. Siligardi, F. E. Garcia Muina, D. Settembre Blundo
Abstract:
A long tradition in ceramic manufacturing since the 18th century, primarily due to the availability of raw materials and an efficient transport system, led to the birth and development of the Italian ceramic tile district that nowadays represents a reference point for this sector even at the global level. This economic growth has been coupled with attention to environmental sustainability issues through various initiatives undertaken over the years at the level of the production sector, such as certification activities and sustainability policies. In this way, starting from an evaluation of sustainability in all its aspects, the present work aims to develop a benchmark helping both producers and consumers. In the present study, through the Life Cycle Sustainability Assessment (LCSA) framework, sustainability has been assessed in all its dimensions: environmental with the Life Cycle Assessment (LCA), economic with the Life Cycle Costing (LCC) and social with the Social Life Cycle Assessment (S-LCA). The annual district production of stoneware tiles during the 2016 reference year has been taken as the reference flow for all three assessments, and the system boundaries cover the entire life cycle of the tiles, except for the LCC, for which only the production costs have been considered at the moment. In addition, a preliminary method for the evaluation of local and indoor emissions has been introduced in order to assess the impact of atmospheric emissions on both people living in the area surrounding the factories and workers. The Life Cycle Assessment results, obtained with the modified IMPACT 2002+ assessment method, highlight that the manufacturing process is responsible for the main impact, especially because of atmospheric emissions at a local scale, followed by the distribution to end users, the installation and the ordinary maintenance of the tiles. 
With regard to the economic evaluation, both internal and external costs have been considered. For the LCC, primary data from the analysis of the financial statements of Italian ceramic companies show that the largest cost items refer to expenses for goods and services and the costs of human resources. The analysis of externalities with the EPS 2015dx method attributes the main damages to the distribution and installation of the tiles. The social dimension has been investigated with a preliminary approach using the Social Hotspots Database, and the results indicate that the most affected damage categories are health and safety and labor rights and decent work. This study shows the potential of the LCSA framework applied to an industrial sector; in particular, it can be a useful tool for building a comprehensive benchmark for the sustainability of the ceramic industry, and it can help companies to actively integrate sustainability principles into their business models.
Keywords: benchmarking, Italian ceramic industry, life cycle sustainability assessment, porcelain stoneware tiles
Procedia PDF Downloads 128