Search results for: Computational Fluid Dynamics
1158 The Correlation between Clostridium Difficile Infection and Bronchial Lung Cancer Occurrence
Authors: Molnar Catalina, Lexi Frankel, Amalia Ardeljan, Enoch Kim, Marissa Dallara, Omar Rashid
Abstract:
Introduction: Clostridium difficile (C. diff) is a toxin-producing bacterium that can cause diarrhea and colitis. The U.S. Centers for Disease Control and Prevention reported that the incidence of C. difficile infection (CDI) increased from 31 cases per 100,000 persons per year in 1996 to 61 per 100,000 in 2003. Approximately 500,000 cases occur each year in the United States. After exposure, the bacteria colonize the colon, adhere to the intestinal epithelium, and produce two toxins: TcdA and TcdB. TcdA affects the intestinal epithelium, causing fluid secretion, inflammation, and tissue necrosis, while TcdB acts as a cytotoxin. The purpose of this study was to evaluate the association between C. diff infection and bronchial lung cancer development. Methods: Using ICD-9 and ICD-10 codes, data were drawn from a Health Insurance Portability and Accountability Act (HIPAA) compliant national database to compare patients infected with C. diff against non-infected patients. Holy Cross Health, Fort Lauderdale, granted access to the database for the purpose of academic research. Patients were matched for age and Charlson Comorbidity Index (CCI). Standard statistical methods were used. Results: There were 4741 cases of bronchial lung cancer in the population not infected with C. diff, as opposed to 2039 cases in the C. diff-infected population. The difference was statistically significant (p < 2.2x10^-16), suggesting that C. diff infection might be protective against bronchial lung cancer. The data were then matched by treatment to minimize the effect of treatment bias. Bronchial cancer incidence was 422 vs. 861 in infected vs. non-infected patients (p < 2.2x10^-16), which again indicates that C. diff infection could be beneficial in diminishing bronchial cancer development.
Conclusion: This retrospective study conveys a statistical correlation between C. diff infection and a decreased incidence of bronchial lung cancer. Further studies are needed to comprehend the protective mechanisms of C. diff infection on lung cancer.
Keywords: C. diff, lung cancer, protective, microbiology
Procedia PDF Downloads 236
1157 The Lasting Impact of Parental Conflict on Self-Differentiation of Young Adult Offspring
Authors: A. Benedetto, P. Wong, N. Papouchis, L. W. Samstag
Abstract:
Bowen’s concept of self-differentiation describes a healthy balance of autonomy and intimacy in close relationships, and it has been widely researched in the context of family dynamics. The current study aimed to clarify the impact of family dysfunction on self-differentiation by specifically examining conflict between parents and by including young adults, an underexamined age group in this domain (N = 300; ages 18 to 30). It also identified a protective factor for offspring from conflictual homes. The 300 young adults (recruited online through Mechanical Turk) completed the Differentiation of Self Inventory (DSI), the Children’s Perception of Interparental Conflict Scale (CPIC), the Parental Bonding Instrument (PBI), and the Symptom Checklist-90-Revised (SCL-90-R). Analyses revealed that interparental conflict significantly impairs self-differentiation among young adult offspring. Specifically, exposure to parental conflict negatively affected young adults’ sense of self, emotional reactivity, and interpersonal cutoff in the context of close relationships. Parental conflict was also related to increased psychological distress among offspring. Surprisingly, the study found that parental divorce does not impair self-differentiation in offspring, demonstrating the distinctly harmful impact of conflict itself. This study clarifies a unique type of family dysfunction that impairs self-differentiation, specifically by distinguishing it from parental divorce; it examines young adults, a critical age group not previously studied in this domain; and it identifies a moderating protective factor (a strong parent-child bond) for offspring exposed to conflict. Overall, the results suggest the need for modifications in parental behavior in order to protect offspring at risk of lasting emotional and interpersonal damage.
Keywords: divorce, family dysfunction, parental conflict, parent-child bond, relationships, self-differentiation, young adults
Procedia PDF Downloads 157
1156 The Impact of Total Parenteral Nutrition on Pediatric Stem Cell Transplantation and Its Complications
Authors: R. Alramyan, S. Alsalamah, R. Alrashed, R. Alakel, F. Altheyeb, M. Alessa
Abstract:
Background: Nutritional support with total parenteral nutrition (TPN) is usually commenced in hematopoietic stem cell transplantation (HSCT) patients. However, it has both benefits and risks. Complications related to the central venous catheter, such as infections, and metabolic disturbances, including abnormal liver function, are usually of concern in such patients. Methods: A retrospective chart review of all pediatric patients who underwent HSCT between 2015 and 2018 in a tertiary hospital in Riyadh, Saudi Arabia. Patients' demographics, type of conditioning, type of nutrition, and patient outcomes were collected. Statistical analysis was conducted using SPSS version 22. Frequencies and percentages were used to describe categorical variables. Mean and standard deviation were used for continuous variables. A P value of less than 0.05 was considered statistically significant. Results: A total of 162 HSCTs were identified during the study period. Indications for allogeneic transplant included hemoglobinopathy in 50 patients (31%) and acute lymphoblastic leukemia in 21 patients (13%). TPN was used in 96 patients (59.3%) for a median of 14 days, nasogastric tube (NGT) feeding in 16 patients (9.9%) for a median of 11 days, and 71 patients (43.8%) were able to tolerate oral feeding. Of the 96 patients (59.3%) who were dependent on TPN, 64 (66.7%) had severe mucositis, in comparison to 17 patients (25.8%) who were either on NGT or tolerated oral intake (P-value = 0.00). Sinusoidal obstruction syndrome (SOS) was seen in 14 patients (14.6%) who were receiving TPN, compared to none of the non-TPN patients (P-value = 0.001). Moreover, the majority of patients who had SOS had received myeloablative conditioning therapy for non-malignant disease (hemoglobinopathy). However, there were no statistically significant differences in graft-versus-host disease (both acute and chronic), bacteremia, or patient outcome between the two groups.
Conclusions: Nutritional support using TPN is used in the majority of patients, especially after myeloablative conditioning associated with severe mucositis. TPN was associated with SOS, especially in hemoglobinopathy patients who received myeloablative therapy. This may emphasize the use of preventative measures such as fluid restriction, diuretics, or defibrotide in high-risk patients.
Keywords: hematopoietic stem cell transplant, HSCT, stem cell transplant, sinusoidal obstruction syndrome, total parenteral nutrition
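The mucositis contrast reported above (64 of 96 TPN patients vs. 17 of the 66 NGT/oral patients) can be re-checked with a Pearson chi-square test on the 2x2 table. The counts come from the abstract; the short-cut formula below is the standard one for 2x2 tables, so this is a sanity check, not the authors' SPSS analysis.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Severe mucositis by nutrition route (counts from the abstract):
#              mucositis   no mucositis
# TPN              64           32
# NGT / oral       17           49
chi2 = chi_square_2x2(64, 32, 17, 49)
print(round(chi2, 2))  # ~26.18, far above 10.83, the 0.001 critical value
```

A statistic this large corresponds to P < 0.001, consistent with the significance reported in the abstract.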
Procedia PDF Downloads 159
1155 Practice and Understanding of Fracturing Renovation for Risk Exploration Wells in Xujiahe Formation Tight Sandstone Gas Reservoir
Authors: Fengxia Li, Lufeng Zhang, Haibo Wang
Abstract:
The tight sandstone gas reservoir in the Xujiahe Formation of the Sichuan Basin has huge reserves, but its utilization rate is low. Fracturing and stimulation are indispensable technologies to unlock its potential and achieve commercial exploitation. Slickwater is the most widely used fracturing fluid system in the fracturing and renovation of tight reservoirs. However, its viscosity is low, its sand-carrying performance is poor, and the risk of sand blockage is high. Increasing the sand-carrying capacity by increasing the displacement will increase the frictional resistance of the pipe string, affecting the drag reduction performance. Variable viscosity slickwater can switch flexibly between different viscosities in real time online, effectively overcoming problems such as sand carrying and drag reduction. Based on a self-developed indoor loop friction testing system, a visualization device for proppant transport, and a HAAKE MARS III rheometer, a comprehensive evaluation was conducted on the performance of variable viscosity slickwater, including drag reduction, rheology, and sand carrying. The indoor experimental results show that: 1. by changing the concentration of drag-reducing agents, the viscosity of the slickwater can be varied between 2 and 30 mPa·s; 2. the drag reduction rate of the variable viscosity slickwater is above 80%, and shearing does not reduce the drag reduction rate of the liquid; 3. under indoor experimental conditions, 15 mPa·s variable viscosity slickwater can basically achieve effective carrying and uniform placement of proppant. The layered fracturing result for the JiangX well in the tight sandstone of the Xujiahe Formation shows that the drag reduction rate of the variable viscosity slickwater is 80.42%, and the daily production of the single layer after fracturing is over 50,000 cubic meters.
This study provides theoretical support and on-site experience for promoting the application of variable viscosity slickwater in tight sandstone gas reservoirs.
Keywords: slickwater, hydraulic fracturing, dynamic sand laying, drag reduction rate, rheological properties
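For reference, the drag reduction rate quoted for the JiangX well is conventionally computed from pressure drops measured at the same flow rate with and without the friction reducer; the pressure-drop values below are illustrative placeholders, not the paper's measurements.

```python
def drag_reduction_rate(dp_water, dp_slickwater):
    """DR% = (dP_water - dP_slickwater) / dP_water * 100, at matched flow rate."""
    return (dp_water - dp_slickwater) / dp_water * 100.0

# Illustrative pressure drops over the same test section, kPa:
dr = drag_reduction_rate(dp_water=50.0, dp_slickwater=10.0)
print(dr)  # → 80.0, i.e., a drag reduction rate of 80%
```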
Procedia PDF Downloads 76
1154 Optimizing Foaming Agents by Air Compression to Unload a Liquid Loaded Gas Well
Authors: Mhenga Agneta, Li Zhaomin, Zhang Chao
Abstract:
When gas velocity is high enough, the gas can entrain liquid and carry it to the surface, but as time passes, the velocity drops to a critical point where fluids start to hold up in the tubing and cause liquid loading, which prevents gas production and may lead to the death of the well. Foam injection is widely used as one of the methods to unload liquid. Since wells have different characteristics, it is not guaranteed that foam can be applied in all of them and bring successful results. This research presents a technology to optimize the efficiency of foam for unloading liquid by air compression. Two methods are used to explain the optimization: (i) mathematical formulas are used to show how density and critical velocity can be minimized when air is compressed into the foaming agents, and how the resulting relationship between flow rate and pressure increase boosts the bottomhole pressure and raises the velocity available to lift liquid to the surface; (ii) experiments test foam carryover capacity and stability as a function of time and surfactant concentration, whereby three surfactants were probed: anionic sodium dodecyl sulfate (SDS), nonionic Triton X-100, and cationic hexadecyltrimethylammonium bromide (HDTAB). The best foaming agents were injected to lift liquid loaded in a vertical well model consisting of 2.5 cm diameter, 390 cm high steel tubing covered by a transparent glass casing 5 cm in diameter and 450 cm high. The results show that, after injecting foaming agents, liquid unloading reached 75%; the efficiency of the foaming agents in unloading liquid increased by a further 10% with the addition of compressed air at a ratio of 1:1. Measured and calculated values agreed within ±3%, an acceptable margin.
The successful application of the technology indicates that engineers and stakeholders could bring water-flooded gas wells back to production with optimized results by paying attention first to the type of surfactants (foaming agents) used, then to the surfactant concentration and the flow rate of the injected surfactants, and finally by compressing air into the foaming agents at a proper ratio.
Keywords: air compression, foaming agents, gas well, liquid loading
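The density/critical-velocity argument in (i) can be illustrated with the classic Turner droplet model (a standard industry correlation, not the authors' exact derivation). In Turner's field units (surface tension in dyn/cm, densities in lbm/ft³, velocity in ft/s), foaming lowers the effective liquid density and surface tension, and with them the minimum gas velocity needed to unload the well; all property values below are assumptions.

```python
def turner_critical_velocity(sigma, rho_liquid, rho_gas):
    """Turner unloading velocity, ft/s (sigma: dyn/cm, densities: lbm/ft3)."""
    return 1.92 * (sigma * (rho_liquid - rho_gas)) ** 0.25 / rho_gas ** 0.5

rho_gas = 0.6  # assumed in-situ gas density, lbm/ft3
v_water = turner_critical_velocity(sigma=60.0, rho_liquid=62.4, rho_gas=rho_gas)
v_foam = turner_critical_velocity(sigma=25.0, rho_liquid=30.0, rho_gas=rho_gas)
print(round(v_water, 2), round(v_foam, 2))  # foam needs a much lower gas velocity
```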
Procedia PDF Downloads 135
1153 Body Shaming and Its Psychological Consequences: A Comprehensive Analysis
Authors: Aryan Sood, Shruti Pathak, Dipanshu Chaudhary, Shreyanshi, Yogesh Pal
Abstract:
In this comprehensive meta-analysis, the study delves into the widespread issue of body shaming, revealing its pervasive impact on various aspects of human life and its profound implications for mental health. The paper first explores the origins of body shaming, including societal norms, media influences, and interpersonal dynamics. It highlights the various forms it takes and its detrimental effects on self-esteem, body image, and psychological well-being. Particularly among adolescents and teenagers in today's social media-driven world, the pressure to conform to idealized beauty standards is significant, leading to negative consequences for their development and health. The research emphasizes the long-lasting mental health effects of body shaming, including depression, body dysmorphia, low self-esteem, and eating disorders. The study also discusses the emergence of body positivity movements as a means to challenge societal norms and promote inclusivity and empathy. Furthermore, the research addresses body shaming in the workplace and presents strategies to combat it, stressing the importance of awareness campaigns, education, and policy changes. In conclusion, the study underscores the critical need for a culture of acceptance and support, the promotion of positive body image, and efforts to mitigate the severe mental health toll that body shaming takes on individuals and communities. Overall, this research provides a comprehensive overview of body shaming, its root causes, and its far-reaching impacts on mental health and well-being. It highlights the urgency of addressing this issue in various contexts, from adolescence to the workplace, and offers solutions, such as awareness campaigns and societal changes, to foster a more inclusive and empathetic future.
Keywords: body shaming, mental health, age, gender, societal norms, appearance-based discrimination, cyberbullying, self-esteem, social media, depression, acceptance
Procedia PDF Downloads 69
1152 Low Complexity Carrier Frequency Offset Estimation for Cooperative Orthogonal Frequency Division Multiplexing Communication Systems without Cyclic Prefix
Authors: Tsui-Tsai Lin
Abstract:
Cooperative orthogonal frequency division multiplexing (OFDM) transmission, which possesses the advantages of better connectivity, expanded coverage, and resistance to frequency-selective fading, has become a powerful solution for the physical layer in wireless communications. However, such a hybrid scheme suffers from the carrier frequency offset (CFO) effects inherited from OFDM-based systems, which lead to a significant degradation in performance. In addition, insertion of a cyclic prefix (CP) at the head of each symbol block to combat inter-symbol interference reduces spectral efficiency. The design of a CFO estimator for cooperative OFDM systems without CP remains an open problem. This motivates us to develop a low complexity CFO estimator for the cooperative OFDM decode-and-forward (DF) communication system without CP over multipath fading channels. Specifically, using a block-type pilot, the CFO estimate is first derived in accordance with the least-squares criterion. A reliable performance can be obtained through an exhaustive two-dimensional (2D) search, at the penalty of heavy computational complexity. As a remedy, an alternative solution realized with an iterative approach is proposed for the CFO estimation. In contrast to the 2D-search estimator, the iterative method enjoys the advantage of substantially reduced implementation complexity without sacrificing estimation performance. Computer simulations are presented to demonstrate the efficacy of the proposed CFO estimation.
Keywords: cooperative transmission, orthogonal frequency division multiplexing (OFDM), carrier frequency offset, iteration
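As a toy illustration of pilot-aided CFO estimation (a repeated-block phase estimator in the spirit of Moose's method, far simpler than the least-squares and iterative estimators proposed in the abstract), the offset can be read from the phase of the correlation between two identical received pilot blocks:

```python
import cmath
import math
import random

random.seed(0)
N = 64                         # samples per pilot block
eps = 0.12                     # true CFO, in units of the subcarrier spacing

# Unit-modulus random pilot, transmitted twice back to back.
pilot = [cmath.exp(2j * math.pi * random.random()) for _ in range(N)]
tx = pilot + pilot

# Noiseless channel: the CFO rotates sample n by exp(j*2*pi*eps*n/N).
rx = [s * cmath.exp(2j * math.pi * eps * n / N) for n, s in enumerate(tx)]

# The two received blocks differ only by a phase of 2*pi*eps, so the
# phase of their correlation recovers the offset (valid for |eps| < 0.5).
corr = sum(rx[n + N] * rx[n].conjugate() for n in range(N))
eps_hat = cmath.phase(corr) / (2 * math.pi)
print(round(eps_hat, 3))
```

In noise, the same correlation is simply averaged; the abstract's estimators additionally handle the relay (DF) structure and the absence of a CP.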
Procedia PDF Downloads 268
1151 A Non-Linear Eddy Viscosity Model for Turbulent Natural Convection in Geophysical Flows
Authors: J. P. Panda, K. Sasmal, H. V. Warrior
Abstract:
Eddy viscosity models in turbulence modeling can be broadly classified as linear and nonlinear models. Linear formulations are simple and require less computational resources but have the disadvantage that they cannot predict the actual flow pattern in complex geophysical flows where streamline curvature and swirling motion are predominant. A constitutive equation for Reynolds stress anisotropy is adopted for the formulation of eddy viscosity, including all possible higher-order terms quadratic in the mean velocity gradients, and a simplified model is developed for actual oceanic flows where only the vertical velocity gradients are important. The new model is incorporated into the one-dimensional General Ocean Turbulence Model (GOTM). Two realistic oceanic test cases (OWS Papa and FLEX'76) have been investigated. The new model predictions match well with the observational data and are better than the predictions of the two-equation k-epsilon model. The proposed model can be easily incorporated into the three-dimensional Princeton Ocean Model (POM) to simulate a wide range of oceanic processes. Practically, this model can be implemented in coastal regions, where transverse shear induces higher vorticity, and for prediction of flow in estuaries and lakes, where the depth is comparatively small. The model predictions of marine turbulence and related quantities (e.g., sea surface temperature, surface heat flux, and vertical temperature profiles) can be utilized in short-term ocean and climate forecasting and warning systems.
Keywords: eddy viscosity, turbulence modeling, GOTM, CFD
Procedia PDF Downloads 202
1150 Adsorption and Selective Determination of Ametryne in Food Samples Using Magnetically Separable Molecularly Imprinted Polymers
Authors: Sajjad Hussain, Sabir Khan, Maria Del Pilar Taboada Sotomayor
Abstract:
This work demonstrates the synthesis of magnetic molecularly imprinted polymers (MMIPs) for the determination of a selected pesticide (ametryne) using high performance liquid chromatography (HPLC). Computational simulation can assist the choice of the most suitable monomer for the synthesis of the polymers. The MMIPs were polymerized at the surface of Fe3O4@SiO2 magnetic nanoparticles (MNPs) using 2-vinylpyridine as the functional monomer, ethylene glycol dimethacrylate (EGDMA) as the cross-linking agent, and 2,2'-azobisisobutyronitrile (AIBN) as the radical initiator. A magnetic non-imprinted polymer (MNIP) was also prepared under the same conditions without the analyte. The MMIPs were characterized by scanning electron microscopy (SEM), Brunauer-Emmett-Teller (BET) analysis, and Fourier transform infrared spectroscopy (FTIR). Pseudo-first-order and pseudo-second-order models were applied to study the kinetics of adsorption, and it was found that the adsorption process followed the pseudo-first-order kinetic model. Adsorption equilibrium data were fitted to the Freundlich and Langmuir isotherms, and the sorption equilibrium process was well described by the Langmuir isotherm model. The selectivity coefficients (α) of the MMIPs for ametryne with respect to atrazine, ciprofloxacin, and folic acid were 4.28, 12.32, and 14.53, respectively. Spiked recoveries ranging between 91.33% and 106.80% were obtained. The results showed high affinity and selectivity of the MMIPs for the pesticide ametryne in the food samples.
Keywords: molecularly imprinted polymer, pesticides, magnetic nanoparticles, adsorption
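The Langmuir fit reported above can be sketched with the common linearisation C/q = C/q_max + 1/(K·q_max): regressing C/q on C gives slope 1/q_max and intercept 1/(K·q_max). The equilibrium data below are synthetic, generated from assumed q_max and K values purely to show the arithmetic; they are not the paper's measurements.

```python
# Assumed "true" Langmuir parameters used to generate synthetic data.
q_max_true, K_true = 25.0, 0.8            # mg/g and L/mg (illustrative)
C = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]       # equilibrium concentrations, mg/L
q = [q_max_true * K_true * c / (1 + K_true * c) for c in C]

# Ordinary least squares of y = C/q against x = C.
x, y = C, [c / qi for c, qi in zip(C, q)]
n, sx, sy = len(x), sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

q_max, K = 1 / slope, slope / intercept   # invert the linearisation
print(round(q_max, 2), round(K, 2))       # recovers 25.0 and 0.8
```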
Procedia PDF Downloads 486
1149 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently, while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as the Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method and confirmed that it provides precise estimates of the regression parameters. Note that this approach can be applied to a dataset when goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica program uses Fisher scoring as the iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, Fisher scoring method, non-linear regression models, composite distributions
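A minimal sketch of the Fisher scoring iteration on a simpler stand-in model (exponential responses with a log link and one predictor, not the paper's composite-distribution regression): the update is β ← β + I(β)⁻¹U(β), where U is the score and the expected information I equals XᵀX for this model. All data below are simulated.

```python
import math
import random

random.seed(1)
x = [i / 50 for i in range(50)]                 # single predictor
b_true = (0.5, 0.3)                             # assumed true coefficients
# y_i ~ Exponential with mean mu_i = exp(b0 + b1 * x_i)
y = [random.expovariate(1 / math.exp(b_true[0] + b_true[1] * xi)) for xi in x]

b0, b1 = 0.0, 0.0
# Expected information X'X (constant for this model), precomputed once.
s11, s12 = len(x), sum(x)
s22 = sum(xi * xi for xi in x)
det = s11 * s22 - s12 * s12

for _ in range(200):                            # Fisher scoring iterations
    mu = [math.exp(b0 + b1 * xi) for xi in x]
    r = [yi / mi - 1 for yi, mi in zip(y, mu)]  # score residuals
    g0, g1 = sum(r), sum(ri * xi for ri, xi in zip(r, x))
    b0 += (s22 * g0 - s12 * g1) / det           # solve (X'X) delta = score
    b1 += (s11 * g1 - s12 * g0) / det

print(round(b0, 2), round(b1, 2))               # MLE of (b0, b1)
```

At convergence the score vanishes, which is the defining property of the MLE; the paper applies the same scoring idea to composite-distribution likelihoods in Mathematica.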
Procedia PDF Downloads 36
1148 Teacher Education: Teacher Development and Support
Authors: Khadem Hichem
Abstract:
With the new technological challenges and dynamics of the contemporary world, most teachers are struggling to maintain an effective and successful teaching/learning environment for learners. Teachers are a key to the success of reforms in the educational setting, and they must improve their competencies to teach effectively. Many researchers emphasize the ongoing professional development of the teacher: enhancing their experiences and encouraging their responsibility for learning, and thus promoting self-reliance, collaboration, and reflection. In short, teachers are considered learners, and they need to learn together. The educational system must support, both conceptually and financially, teachers' development as lifelong learners. Teachers need opportunities to grow in language proficiency and in knowledge. Given the changing nature of language and culture in the world, all teachers must have opportunities to update their knowledge and practices. Many researchers in the field of foreign or additional languages indicate that teachers must keep abreast of effective instructional practices and that they need special support with the challenging task of developing and administering proficiency tests to their students. For significant change to occur, each individual teacher's needs must be addressed. The teacher must be involved experientially in the process of development, since, by itself, knowledge of how to change does not mean change will be initiated. For improvement to occur, new skills have to be guided, practiced, and reflected upon in collaboration with colleagues. Clearly, teachers are at different places developmentally; therefore, allowances for various entry levels and individual differences need to be built into the professional development structure. Objectives must be meaningful to the participant, and teacher improvement must be stated in terms of student knowledge, student performance, and motivation.
The most successful professional development process acknowledges the student-centered nature of good teaching. This paper highlights the importance of the teacher professional development process and institutional support as ways to enhance a good teaching and learning environment.
Keywords: teacher professional development, teacher competencies, institutional support, teacher education
Procedia PDF Downloads 354
1147 Enhancing Sewage Sludge Management through Integrated Hydrothermal Liquefaction and Anaerobic Digestion: A Comparative Study
Authors: Harveen Kaur Tatla, Parisa Niknejad, Rajender Gupta, Bipro Ranjan Dhar, Mohd. Adana Khan
Abstract:
Sewage sludge management presents a pressing challenge in the realm of wastewater treatment, calling for sustainable and efficient solutions. This study explores the integration of Hydrothermal Liquefaction (HTL) and Anaerobic Digestion (AD) as a promising approach to address the complexities associated with sewage sludge treatment. The integration of these two processes offers a complementary and synergistic framework, allowing for the mitigation of inherent limitations, thereby enhancing overall efficiency, product quality, and the comprehensive utilization of sewage sludge. In this research, we investigate the optimal sequencing of HTL and AD within the treatment framework, aiming to discern which sequence, HTL followed by AD or AD followed by HTL, yields superior results. We explore a range of HTL working temperatures, including 250°C, 300°C, and 350°C, coupled with residence times of 30 and 60 minutes. To evaluate the effectiveness of each sequence, a battery of tests is conducted on the resultant products, encompassing Total Ammonia Nitrogen (TAN), Chemical Oxygen Demand (COD), and Volatile Fatty Acids (VFA). Additionally, elemental analysis is employed to determine which sequence maximizes energy recovery. Our findings illuminate the intricate dynamics of HTL and AD integration for sewage sludge management, shedding light on the temperature-residence time interplay and its impact on treatment efficiency. This study not only contributes to the optimization of sewage sludge treatment but also underscores the potential of integrated processes in sustainable waste management strategies. The insights gleaned from this research hold promise for advancing the field of wastewater treatment and resource recovery, addressing critical environmental and energy challenges.
Keywords: Anaerobic Digestion (AD), aqueous phase, energy recovery, Hydrothermal Liquefaction (HTL), sewage sludge management, sustainability
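One common route from elemental (CHNS-O) analysis to an energy-recovery figure is a higher-heating-value correlation such as Dulong's, HHV [MJ/kg] ≈ 0.3383·C + 1.442·(H − O/8) with C, H, O in wt%; energy recovery is then yield × HHV_product / HHV_feed. The compositions and biocrude yield below are assumed placeholders, not the study's measurements.

```python
def hhv_dulong(c, h, o):
    """Dulong higher heating value, MJ/kg, from wt% C, H, O (S neglected)."""
    return 0.3383 * c + 1.442 * (h - o / 8)

hhv_sludge = hhv_dulong(c=35.0, h=5.0, o=25.0)    # assumed feed composition
hhv_biocrude = hhv_dulong(c=72.0, h=8.5, o=12.0)  # assumed biocrude composition
biocrude_yield = 0.35                             # mass fraction, assumed

energy_recovery = biocrude_yield * hhv_biocrude / hhv_sludge
print(round(hhv_biocrude, 1), round(100 * energy_recovery, 1))
```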
Procedia PDF Downloads 83
1146 Sedimentary Response to Coastal Defense Works in São Vicente Bay, São Paulo
Authors: L. C. Ansanelli, P. Alfredini
Abstract:
The article presents the evaluation of the effectiveness of two groins located at Gonzaguinha and Milionários Beaches, situated on the southeast coast of Brazil. The effectiveness of these coastal defense structures is evaluated in terms of sedimentary dynamics, which is one of the most important environmental processes to be assessed in coastal engineering studies. The applied method is based on the Delft3D numerical model system: the Delft3D-WAVE module was used for wave modelling, Delft3D-FLOW for hydrodynamic modelling, and Delft3D-SED for sediment transport modelling. The calibration of the models was carried out so that the simulations adequately represent the studied region, evaluating improvements in the model elements through statistical comparisons between the results and the wave, current, and tide data recorded in the study area. Analysis of the maximum wave heights was carried out to select the months with the highest accumulated energy and to impose these conditions in the engineering scenarios. The engineering studies were performed for two scenarios: 1) numerical simulation of the area considering only the two existing groins; 2) conception of breakwaters coupled to the ends of the existing groins, resulting in two T-shaped structures. The sediment model showed that, for the simulated period, the area is affected by erosive processes and that the existing groins have little effectiveness in defending the coast in question. The implemented T structures showed some effectiveness in protecting the beaches against erosion and provided the recovery of the portion of Milionários Beach directly sheltered by them. To complement this study, it is suggested that further engineering scenarios be conceived that might recover other areas of the studied region.
Keywords: coastal engineering, coastal erosion, São Vicente Bay, Delft3D, coastal engineering works
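A typical similarity statistic for this kind of model-data comparison is the Willmott skill score, d = 1 − Σ(M−O)² / Σ(|M−Ō| + |O−Ō|)², where M is the modelled and O the observed series; values near 1 indicate good agreement. The wave-height series below are illustrative, not the study's records.

```python
def willmott_skill(model, obs):
    """Willmott (1981) index of agreement between two equal-length series."""
    obar = sum(obs) / len(obs)
    num = sum((m - o) ** 2 for m, o in zip(model, obs))
    den = sum((abs(m - obar) + abs(o - obar)) ** 2 for m, o in zip(model, obs))
    return 1 - num / den

observed_hs = [0.8, 1.1, 1.4, 1.2, 0.9, 1.0]   # significant wave height, m
modelled_hs = [0.7, 1.0, 1.5, 1.3, 1.0, 0.9]
skill = willmott_skill(modelled_hs, observed_hs)
print(round(skill, 3))  # close to 1 -> the model tracks the observations well
```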
Procedia PDF Downloads 127
1145 Evaluation of Residual Stresses in Human Face as a Function of Growth
Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan
Abstract:
Growth and remodeling of biological structures have gained lots of attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design or computer-assisted surgical interventions. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically, growth is one of the main sources. Extracting body organ shapes from medical imaging does not produce any information regarding the existing residual stresses in that organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses might cause erroneous results in numerical simulations. Accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analysis results. This paper presents an original computational framework based on gradual growth to determine the residual stresses due to growth. To illustrate the method, we apply it to a finite element model of a healthy human face reconstructed from medical images. The distribution of residual stress in facial tissues is computed, which can overcome the effect of gravity and maintain tissue firmness. Our assumption is that tissue wrinkles caused by aging could be a consequence of decreasing residual stress, and thus of no longer counteracting gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery; it would indeed help surgeons to estimate tissue changes after surgery.
Keywords: finite element method, growth, residual stress, soft tissue
Procedia PDF Downloads 271
1144 The Impact of City Mobility on Propagation of Infectious Diseases: Mathematical Modelling Approach
Authors: Asrat M. Belachew, Tiago Pereira (Institute of Mathematics and Computer Sciences, Avenida Trabalhador São Carlense, 400, São Carlos, 13566-590, Brazil)
Abstract:
Infectious diseases are among the most prominent threats to human beings. They cause morbidity and mortality to an individual and collapse the social, economic, and political systems of the whole world collectively. Mathematical models are fundamental tools and provide a comprehensive understanding of how infectious diseases spread and designing the control strategy to mitigate infectious diseases from the host population. Modeling the spread of infectious diseases using a compartmental model of inhomogeneous populations is good in terms of complexity. However, in the real world, there is a situation that accounts for heterogeneity, such as ages, locations, and contact patterns of the population which are ignored in a homogeneous setting. In this work, we study how classical an SEIR infectious disease spreading of the compartmental model can be extended by incorporating the mobility of population between heterogeneous cities during an outbreak of infectious disease. We have formulated an SEIR multi-cities epidemic spreading model using a system of 4k ordinary differential equations to describe the disease transmission dynamics in k-cities during the day and night. We have shownthat the model is epidemiologically (i.e., variables have biological interpretation) and mathematically (i.e., a unique bounded solution exists all the time) well-posed. We constructed the next-generation matrix (NGM) for the model and calculated the basic reproduction number R0for SEIR-epidemic spreading model with cities mobility. R0of the disease depends on the spectral radius mobility operator, and it is a threshold between asymptotic stability of the disease-free equilibrium and disease persistence. Using the eigenvalue perturbation theorem, we showed that sending a fraction of the population between cities decreases the reproduction number of diseases in interconnected cities. 
As a result, disease transmission decreases in the population.
Keywords: SEIR model, mathematical model, city mobility, epidemic spreading
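For illustration, the multi-city SEIR structure described above can be sketched numerically. The sketch below is a minimal two-city system: the mobility matrix M, all rate parameters, and the simple force-of-infection averaging are illustrative assumptions, not the paper's actual 4k-ODE day/night formulation.

```python
import numpy as np

# Minimal two-city SEIR sketch with commuting. M[i, j] is the assumed fraction
# of city j's residents present in city i (columns sum to 1). Parameters are
# hypothetical placeholders, not values from the paper.

def seir_step(S, E, I, R, beta, sigma, gamma, M, dt):
    N = S + E + I + R
    lam_location = beta * (M @ I) / (M @ N)  # infection pressure at each location
    lam = M.T @ lam_location                 # exposure felt by each city's residents
    dS = -lam * S
    dE = lam * S - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR

M = np.array([[0.9, 0.1],   # 10% of each city's residents commute to the other
              [0.1, 0.9]])
S = np.array([9990.0, 10000.0])
E = np.array([0.0, 0.0])
I = np.array([10.0, 0.0])    # outbreak seeded in city 0 only
R = np.array([0.0, 0.0])
for _ in range(2000):        # 200 days with dt = 0.1, simple Euler stepping
    S, E, I, R = seir_step(S, E, I, R, beta=0.4, sigma=0.2, gamma=0.1, M=M, dt=0.1)
```

Because the compartment derivatives sum to zero, the total population is conserved, and the coupling through M carries the outbreak from the seeded city into the other one.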
Procedia PDF Downloads 109
1143 Convolutional Neural Networks-Optimized Text Recognition with Binary Embeddings for Arabic Expiry Date Recognition
Authors: Mohamed Lotfy, Ghada Soliman
Abstract:
Recognizing Arabic dot-matrix digits is a challenging problem due to the unique characteristics of dot-matrix fonts, such as irregular dot spacing and varying dot sizes. This paper presents an approach for recognizing Arabic digits printed in dot-matrix format. The proposed model is based on Convolutional Neural Networks (CNN) that take the dot matrix as input and generate embeddings that are rounded to produce binary representations of the digits. The binary embeddings are then used to perform Optical Character Recognition (OCR) on the digit images. To overcome the challenge of the limited availability of dotted Arabic expiration date images, we developed a True Type Font (TTF) for generating synthetic images of Arabic dot-matrix characters. The model was trained on a synthetic dataset of 3287 images and tested on 658 synthetic images, representing realistic expiration dates from 2019 to 2027 in the format yyyy/mm/dd. Our model achieved an accuracy of 98.94% on expiry date recognition in Arabic dot-matrix format using fewer parameters and less computational resources than traditional CNN-based models. By investigating and presenting our findings comprehensively, we aim to contribute substantially to the field of OCR and pave the way for advancements in Arabic dot-matrix character recognition. Our proposed approach is not limited to Arabic dot-matrix digit recognition but can also be extended to text recognition tasks, such as text classification and sentiment analysis.
Keywords: computer vision, pattern recognition, optical character recognition, deep learning
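The binary-embedding decoding step described above can be sketched as follows. The trained CNN itself is omitted; the sketch only shows how sigmoid activations would be rounded to a binary code and matched to the nearest digit codeword. The 4-bit-per-digit codebook is an assumption for illustration, not the paper's actual embedding.

```python
import numpy as np

# Hedged sketch of binary-embedding OCR decoding. A trained CNN (not shown)
# would output per-bit activations in [0, 1]; rounding at 0.5 yields a binary
# embedding, which is decoded to the digit with the smallest Hamming distance.
# The simple 4-bit binary codebook below is a hypothetical stand-in.

CODEBOOK = {d: np.array([int(b) for b in format(d, "04b")]) for d in range(10)}

def decode_digit(activations):
    bits = (np.asarray(activations) >= 0.5).astype(int)   # round to binary code
    dists = {d: int(np.sum(bits != code)) for d, code in CODEBOOK.items()}
    return min(dists, key=dists.get)                       # nearest codeword

# Noisy activations whose rounded code is 0111 (binary for 7):
print(decode_digit([0.1, 0.8, 0.6, 0.9]))  # → 7
```

Rounding plus nearest-codeword lookup makes the classifier tolerant of activations that are merely on the correct side of 0.5, which is the appeal of binary embeddings for noisy dot-matrix inputs.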
Procedia PDF Downloads 96
1142 Glacier Dynamics and Mass Fluctuations in Western Himalayas: A Comparative Analysis of Pir-Panjal and Greater Himalayan Ranges in Jhelum Basin, India
Authors: Syed Towseef Ahmad, Fatima Amin, Pritha Acharya, Anil K. Gupta, Pervez Ahmad
Abstract:
Glaciers, being the sentinels of climate change, are the most visible evidence of global warming. Given the unavailability of observed field-based data, this study has focused on the use of geospatial techniques to obtain information about the glaciers of the Pir-Panjal (PPJ) and Greater Himalayan (GHR) regions of the Jhelum Basin. These glaciers need to be monitored in line with the variations in climatic conditions because they contribute significantly to various sectors in the region. The main aim of this study is to map the glaciers in the two adjacent regions (PPJ and GHR) in the north-western Himalayas, which have different topographies, and to compare the changes in various glacial attributes over the period 1990-2020. During the last three decades, the PPJ and GHR regions have experienced deglaciation of around 36 and 26 percent, respectively. The mean elevation of GHR glaciers has increased from 4312 to 4390 masl, while that of PPJ glaciers has increased from 4085 to 4124 masl during the observation period. Using the accumulation area ratio (AAR) method, mean mass balances of -34.52 and -37.6 cm w.e. were recorded for the glaciers of GHR and PPJ, respectively. The difference in areal and mass loss of glaciers in these regions may be due to (i) the smaller size of PPJ glaciers, which are all smaller than 1 km² and are thus more responsive to climate change, (ii) the higher mean elevation of GHR glaciers, and (iii) local variations in climatic variables in these glaciated regions. Time series analysis of climate variables indicates that both the mean maximum and minimum temperatures of Qazigund station (Tmax = 19.2, Tmin = 6.4) are comparatively higher than those of Pahalgam station (Tmax = 18.8, Tmin = 3.2).
Except for precipitation in Qazigund (slope = -0.3 mm a⁻¹), each climatic parameter has shown an increasing trend during these three decades, and with slopes of 0.04 and 0.03 °C a⁻¹, the positive trends in Tmin (Pahalgam) and Tmax (Qazigund) are statistically significant (p ≤ 0.05).
Keywords: glaciers, climate change, Pir-Panjal, greater Himalayas, mass balance
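The AAR method used above can be sketched in a few lines. The linear regression linking AAR to specific mass balance must in practice be calibrated against benchmark glaciers; the slope and intercept below are hypothetical placeholders, not the coefficients used in the study.

```python
# Hedged sketch of the accumulation area ratio (AAR) method. The regression
# coefficients relating AAR to specific mass balance are illustrative
# assumptions; real studies fit them to benchmark-glacier observations.

def aar(accumulation_area_km2, total_area_km2):
    # Fraction of the glacier lying in the accumulation zone.
    return accumulation_area_km2 / total_area_km2

def mass_balance_cm_we(aar_value, slope=250.0, intercept=-150.0):
    # mb = slope * AAR + intercept; with these placeholder coefficients a
    # balanced glacier (mb = 0) corresponds to AAR = 0.6.
    return slope * aar_value + intercept

ratio = aar(0.45, 1.0)                 # a small glacier, < 1 km² as in the PPJ sample
print(mass_balance_cm_we(ratio))       # prints a negative value (mass loss)
```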
Procedia PDF Downloads 89
1141 A New Cytoprotective Drug on the Basis of Cytisine: Phase I Clinical Trial Results
Authors: B. Yermekbayeva, A. Gulyayaev, T. Nurgozhin, C. Bektur
Abstract:
Cytisine aminophosphonate, under the name "Cytafat", was approved for clinical trials in the Republic of Kazakhstan as a putative liver-protecting drug for the treatment of acute toxic hepatitis. The clinical trial was conducted as a double-blind study. The total number of patients was 71, aged from 16 to 56 years. Research on healthy volunteers determined the maximal tolerable dose of "Cytafat" as 200 mg/kg. Side effects when administered at high doses (100-200 mg/kg) are tachycardia and an increase of arterial blood pressure. The drug was tested in the treatment of 28 patients with a syndrome of hepatocellular failure (poisoning with substitutes of alcohol, rat poison, or medical products). "Cytafat" was intravenously administered at a dose of 10 mg/kg in 200 ml of 5% glucose solution once daily, with 1-3 administrations in total. In the comparison group, 23 patients were treated intravenously once a day with "Essenciale H" at a dose of 10 ml, and 20 patients received a placebo (10 ml of glucose intravenously). In all cases of toxic hepatopathology, a significant positive clinical effect of the tested drug, distinguishable from placebo and surpassing the alternative, was observed. Within a day after administration, a sharp reduction of cytolytic syndrome parameters (ALT, AST, alkaline phosphatase, thymol turbidity test, GGT) was registered, a reduction in the severity of cholestatic syndrome (decreased bilirubin) was recorded, and indices of lipid peroxidation decreased significantly. The following day, positive dynamics were determined in all cases with ultrasound examination (reduction of diffuse changes and signs of reactive pancreatitis), and hepatomegaly disappeared. Normalization of all parameters occurred 2-3 times faster than when using the drug "Essenciale H" or placebo. The average term of elimination of toxic hepatopathy was 2.8 days with "Cytafat", 7.2 days with "Essenciale H", and 10.6 days with placebo.
The new drug "Cytafat" has pronounced cytoprotective properties.
Keywords: cytisine, cytoprotection, hepatopathy, hepatoprotection
Procedia PDF Downloads 369
1140 Oral Grammatical Errors of Arabic as Second Language (ASL) Learners: An Applied Linguistic Approach
Authors: Sadeq Al Yaari, Fayza Al Hammadi, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaari, Salah Al Yami
Abstract:
Background: When we further take Arabic grammatical issues into account in accordance with applied linguistic investigations of Arabic as Second Language (ASL) learners, a fundamental issue arises concerning the production of speech in Arabic: the oral grammatical errors committed by ASL learners. Aims: Using manual rating as well as a computational analytic methodology to test a corpus of recorded speech by ASL learners, this study aims to identify the areas of difficulty in learning Arabic grammar. More specifically, it examines how and why ASL learners make grammatical errors in their oral speech. Methods: Speech from four (4) ASL learners, who ranged in age from 23 to 30, was naturally collected on tape. All participants had completed an intensive two-year Arabic program, and a 20-minute speech sample was recorded for each participant. The collected corpus was then rated against standard Arabic grammar, a process comprising description, analysis, and assessment. Conclusions: The findings of this paper can be summarized in the fact that ASL learners face many grammatical difficulties when studying Arabic: word order, tenses and aspects, function words, subject-verb agreement, verb form, active-passive voice, global and local errors, and process-based errors including addition, omission, substitution, or a combination of any of them.
Keywords: grammar, error, oral, Arabic, second language, learner, applied linguistics
Procedia PDF Downloads 49
1139 Exploring Gender-Based Violence in Indigenous Communities in Argentina and Costa Rica: A Review of the Current Literature
Authors: Jocelyn Jones
Abstract:
The objective of this literature review is to provide an assessment of the current literature concerning gender-based violence (GBV) within indigenous communities in Argentina and Costa Rica, and various public intervention strategies that have been implemented to counter the increasing rates of violence within these populations. The review will address some of the unique challenges and contextual factors influencing the prevalence and response to such violence, including the enduring impact of colonialism on familial structures, community dynamics, and the perpetuation of violence. Drawing on indigenous feminist perspectives, the paper critically assesses the intersectionality of gender, ethnicity, and socio-economic status in shaping the experiences of indigenous women, men, and gender-diverse individuals. In comparing the two nations, the literature review identifies commonalities and divergences in policy frameworks, legal responses, and grassroots initiatives aimed at addressing GBV. Regarding the assessment of the efficacy of existing interventions, the paper will consider the role of cultural revitalization, community engagement, and collaborative efforts between indigenous communities and external agencies in the development of future policies. Moreover, the review will highlight the importance of decolonizing methodologies in research and intervention strategies, and the need to emphasise culturally sensitive approaches that respect and integrate indigenous worldviews and traditional knowledge systems. Additionally, the paper will explore the potential impact of colonial legacies, resource extraction, and land dispossession on exacerbating vulnerabilities to GBV within indigenous communities. The aim of this paper is to contribute to a more in-depth understanding of GBV in indigenous contexts in order to promote cross-cultural learning and inform future research. 
Ultimately, this review will demonstrate the necessity of adopting a holistic and context-specific approach to address gender-based violence in indigenous communities.
Keywords: gender-based violence, indigenous, colonialism, literature review
Procedia PDF Downloads 80
1138 Revealing the Urban Heat Island: Investigating its Spatial and Temporal Changes and Relationship with Air Quality
Authors: Aneesh Mathew, Arunab K. S., Atul Kumar Sharma
Abstract:
The uncontrolled rise in population has led to unplanned, swift, and unsustainable urban expansion, causing detrimental environmental impacts on both local and global ecosystems. This research delves into a comprehensive examination of the Urban Heat Island (UHI) phenomenon in Bengaluru and Hyderabad, India. It centers on the spatial and temporal distribution of UHI and its correlation with air pollutants. Conducted across summer and winter seasons from 2001 to 2021 in Bengaluru and Hyderabad, this study discovered that UHI intensity varies seasonally, peaking in summer and decreasing in winter. The annual maximum UHI intensities range from 4.65 °C to 6.69 °C in Bengaluru and from 5.74 °C to 6.82 °C in Hyderabad. Bengaluru particularly experiences notable fluctuations in average UHI intensity. Introducing the Urban Thermal Field Variance Index (UTFVI), the study indicates a consistent strong UHI effect in both cities, significantly impacting living conditions. Moreover, hotspot analysis demonstrates a rising trend in UHI-affected areas over the years in Bengaluru and Hyderabad. This research underscores the connection between air pollutant concentrations and land surface temperature (LST), highlighting the necessity of comprehending UHI dynamics for urban environmental management and public health. It contributes to a deeper understanding of UHI patterns in swiftly urbanizing areas, providing insights into the intricate relationship between urbanization, climate, and air quality. These findings serve as crucial guidance for policymakers, urban planners, and researchers, facilitating the development of innovative, sustainable strategies to mitigate the adverse impacts of uncontrolled expansion while promoting the well-being of local communities and the global environment.
Keywords: urban heat island effect, land surface temperature, air pollution, urban thermal field variance index
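The UTFVI introduced above is commonly computed per pixel as (Ts - Tmean) / Ts, where Ts is the land surface temperature and Tmean the scene mean. The sketch below uses that common formulation; the hotspot cut point and the sample LST values are illustrative assumptions, since class thresholds differ between studies.

```python
import numpy as np

# Sketch of the Urban Thermal Field Variance Index:
#   UTFVI = (Ts - Tmean) / Ts
# computed over a (toy) array of land surface temperatures in kelvin.
# The 0.02 hotspot threshold is an assumed cut point for illustration.

def utfvi(lst_kelvin):
    lst = np.asarray(lst_kelvin, dtype=float)
    return (lst - lst.mean()) / lst

lst = np.array([300.0, 305.0, 310.0, 315.0])  # hypothetical LST pixels (K)
index = utfvi(lst)
hotspots = index > 0.02                        # strongest assumed UHI class
```

Pixels cooler than the scene mean get negative index values, while the warmest pixels exceed the assumed hotspot threshold, which is how UHI-affected areas are delineated.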
Procedia PDF Downloads 82
1137 Recent Progress in Wave Rotor Combustion
Authors: Mohamed Razi Nalim, Shahrzad Ghadiri
Abstract:
With current concerns regarding global warming, demand for greater environmental awareness in society has increased significantly. With gradual development in hybrid and electric vehicles and the availability of renewable energy resources, increasing the efficiency of fossil-fuel combustion engines seems a faster route toward sustainability and reduced greenhouse gas emissions. This paper aims to provide a comprehensive review of recent progress in the wave rotor combustor, one of the combustion concepts with considerable potential to improve power output and emission standards. A wave rotor is an oscillatory flow device that uses unsteady gas dynamics to transfer energy by generating pressure waves. From a thermodynamic point of view, unlike conventional gas turbine engines, which follow the constant-pressure Brayton cycle, wave rotors offer higher cycle efficiency due to pressure gain during the combustion process, based on the Humphrey cycle. First, the paper covers all recent and ongoing computational and experimental studies around the world, with a quick look at the milestones in the history of wave rotor development. Second, the main similarities and differences between the ignition system of the wave rotor and that of piston engines are considered. A comparison is also made with another pressure-gain device, the rotating detonation engine. Next, the main challenges and research needs for wave rotor combustor commercialization are discussed.
Keywords: wave rotor combustor, unsteady gas dynamic, pre-chamber jet ignition, pressure gain combustion, constant-volume combustion
Procedia PDF Downloads 85
1136 Practice Educators' Perspective: Placement Challenges in Social Work Education in England
Authors: Yuet Wah Echo Yeung
Abstract:
Practice learning is an important component of social work education. Practice educators are charged with the responsibility to support and enable learning while students are on placement. They also play a key role in teaching students to integrate theory and practice, as well as assessing their performance. Current literature highlights the structural factors that make it difficult for practice educators to create a positive learning environment for students. Practice educators find it difficult to give sufficient attention to their students because of the lack of workload relief, the increasing emphasis on managerialism and bureaucratisation, and a range of competing organisational and professional demands. This paper reports the challenges practice educators face and how they manage these challenges in this context. Semi-structured face-to-face interviews were conducted with thirteen practice educators who support students in statutory and voluntary social care settings in the Northwest of England. Interviews were conducted between April and July 2017 and each interview lasted about 40 minutes. All interviews were recorded and transcribed. All practice educators are experienced social work practitioners with practice experience ranging from 6 to 42 years. On average they have acted as practice educators for 13 years and all together have supported 386 students. Our findings reveal that apart from the structural factors that impact how practice educators perform their roles, they also faced other challenges when supporting students on placement. They include difficulty in engaging resistant students, complexity in managing power dynamics in the context of practice learning, and managing the dilemmas of fostering a positive relationship with students whilst giving critical feedback. 
Suggestions to enhance the practice educators' role include support from organisations and social work teams, effective communication with university tutors, and a forum for practice educators to share good practice and discuss placement issues.
Keywords: social work education, placement challenges, practice educator, practice learning
Procedia PDF Downloads 192
1135 Design of Digital IIR Filter Using Opposition Learning and Artificial Bee Colony Algorithm
Authors: J. S. Dhillon, K. K. Dhaliwal
Abstract:
In almost all digital filtering applications, digital infinite impulse response (IIR) filters are preferred over finite impulse response (FIR) filters because they provide much better performance, less computational cost, and smaller memory requirements for similar magnitude specifications. However, digital IIR filters are generally multimodal with respect to the filter coefficients, and therefore reliable methods that can provide globally optimal solutions are required. The artificial bee colony (ABC) algorithm is one such recently introduced meta-heuristic optimization algorithm. But in some cases it searches the solution space insufficiently, resulting in a weak exchange of information, and hence it is not able to return better solutions. To overcome this deficiency, an opposition-based learning strategy is incorporated into ABC, and hence a modified version called the oppositional artificial bee colony (OABC) algorithm is proposed in this paper. Duplication of members is avoided during the run, which also augments the exploration ability. The developed algorithm is then applied to the design of optimal and stable digital IIR filter structures, where the design of low-pass (LP) and high-pass (HP) filters is carried out. Fuzzy theory is applied to maximize satisfaction of the minimum magnitude error and stability constraints. To check the effectiveness of OABC, the results are compared with some well-established filter design techniques, and it is observed that in most cases OABC returns better or at least comparable results.
Keywords: digital infinite impulse response filter, artificial bee colony optimization, opposition based learning, digital filter design, multi-parameter optimization
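The opposition-based learning step at the heart of OABC can be sketched on its own: for a candidate x in the box [a, b], its opposite is a + b - x, and keeping the fitter of each candidate/opposite pair widens exploration. The sphere fitness function below is a stand-in for the actual IIR magnitude-error objective, and all sizes and bounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch of opposition-based learning (OBL) as used to augment ABC.
# For a point x in [a, b], the opposite point is a + b - x. Evaluating both
# and keeping the best half widens the explored region at no extra search cost.

def opposite(population, a, b):
    return a + b - population

def obl_select(population, a, b, fitness):
    opp = opposite(population, a, b)
    both = np.vstack([population, opp])                 # candidates + opposites
    scores = np.apply_along_axis(fitness, 1, both)
    return both[np.argsort(scores)[: len(population)]]  # keep the fitter half

a, b = -1.0, 1.0
pop = rng.uniform(a, b, size=(10, 4))   # 10 candidate coefficient vectors
best = obl_select(pop, a, b, lambda x: np.sum(x ** 2))  # stand-in objective
```

In the full OABC loop, this selection would be applied at initialization and periodically during the run, with the actual fuzzy magnitude-error objective in place of the sphere function.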
Procedia PDF Downloads 479
1134 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals
Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty
Abstract:
A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge to the efficient implementation of quantum chemical software. This work presents a moment-based machine-learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest approach with recursive feature elimination was used to identify promising features; it performed best for learning the sign of each coefficient but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature-masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.
Keywords: quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction
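The core approximation above, modeling each integral as a linear combination of a small number of moment features, can be sketched with synthetic data. The "moments" and target integrals below are fabricated for illustration, and a plain least-squares fit stands in for the random-forest and neural-network estimators actually used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch of the moment-based approximation: integral ≈ moments @ coeffs.
# The moment features and target "integrals" are synthetic; in the work above,
# coefficient signs and magnitudes were learned with random forests and neural
# networks rather than by the least-squares stand-in shown here.

n_samples, n_moments = 200, 6
moments = rng.normal(size=(n_samples, n_moments))        # moment features
true_coeffs = np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.3])
integrals = moments @ true_coeffs                         # synthetic targets

est_coeffs, *_ = np.linalg.lstsq(moments, integrals, rcond=None)
```

Because the synthetic targets are exactly linear in the moments, the fit recovers the coefficients; with real electron-repulsion integrals the relation is only approximate, which is why learned, nonlinear estimators were needed.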
Procedia PDF Downloads 116
1133 Hybrid Thresholding Lifting Dual Tree Complex Wavelet Transform with Wiener Filter for Quality Assurance of Medical Image
Authors: Hilal Naimi, Amelbahahouda Adamou-Mitiche, Lahcene Mitiche
Abstract:
The main problem in the area of medical imaging has been image denoising. The most challenging aspect of image denoising is to preserve data-carrying structures like surfaces and edges in order to achieve good visual quality. Different algorithms with different denoising performances have been proposed in previous decades. More recently, models based on deep learning have shown great promise to outperform all traditional approaches. However, these techniques are limited by the need for large training sample sizes and high computational costs. This research proposes a denoising approach based on the Lifting Dual Tree Complex Wavelet Transform (LDTCWT), using hybrid thresholding with a Wiener filter to enhance image quality. The LDTCWT is a lifting-based wavelet transform that produces complex coefficients by employing a dual tree of lifting wavelet filters to obtain the real and imaginary parts. This permits the transform to achieve approximate shift invariance and directionally selective filters while reducing computation time (properties lacking in the classical wavelet transform). To develop this approach, a hybrid thresholding function is modeled by integrating the Wiener filter into the thresholding function.
Keywords: lifting wavelet transform, image denoising, dual tree complex wavelet transform, wavelet shrinkage, wiener filter
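The idea of combining a thresholding rule with a Wiener gain can be sketched on a vector of wavelet coefficients. The blend of soft and garrote shrinkage, the blend parameter alpha, and the way the Wiener gain is applied are illustrative assumptions in the spirit of the approach, not the paper's exact hybrid function.

```python
import numpy as np

# Hedged sketch of hybrid wavelet shrinkage combined with a Wiener gain.
# soft(): classic soft thresholding; garrote(): non-negative garrote;
# hybrid_threshold(): an assumed alpha-blend of the two.

def soft(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def garrote(w, t):
    safe = np.where(w == 0, 1.0, w)                    # avoid division by zero
    return np.where(np.abs(w) > t, w - t ** 2 / safe, 0.0)

def hybrid_threshold(w, t, alpha=0.5):
    return alpha * soft(w, t) + (1.0 - alpha) * garrote(w, t)

def wiener_gain(coeffs, noise_var):
    # Empirical Wiener shrinkage factor for the whole subband.
    signal_var = np.maximum(np.mean(coeffs ** 2) - noise_var, 0.0)
    return signal_var / (signal_var + noise_var)

w = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])              # toy subband coefficients
denoised = wiener_gain(w, noise_var=0.5) * hybrid_threshold(w, t=0.8)
```

Small coefficients (likely noise) are zeroed by the threshold, while large coefficients (likely edges) are only mildly shrunk, which is the edge-preserving behaviour the abstract is after.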
Procedia PDF Downloads 163
1132 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach
Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma
Abstract:
Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major and especially minor hysteresis loops is one of the most challenging topics in computational magnetism. Recent attempts at mathematical prediction in this context using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous neural network (NARX) model. This architecture aims to achieve a relative RMSE of the order of a few hundred ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high-field and low-field cycles. The NARX-based architecture is compared with the state of the art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation; this makes the good result obtained very promising for future applications in this context.
Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX
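The NARX idea above, predicting the current output from lagged outputs and lagged exogenous inputs, can be sketched with a linear-in-features regression standing in for the neural network. The lag orders and the synthetic system below are illustrative assumptions, not the magnet data or architecture from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hedged NARX sketch: predict y[t] (think magnetic field) from lagged y and
# lagged exogenous input u (think excitation current). A least-squares model
# on the lagged features stands in for the actual neural network.

def narx_features(y, u, ny=2, nu=2):
    rows = []
    for t in range(max(ny, nu), len(y)):
        rows.append(np.concatenate([y[t - ny:t], u[t - nu:t]]))
    return np.array(rows)

# Synthetic linear system used only to generate data:
# y[t] = 0.6 y[t-1] - 0.2 y[t-2] + 0.8 u[t-1] + 0.1 u[t-2]
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.8 * u[t - 1] + 0.1 * u[t - 2]

X = narx_features(y, u)                  # rows: [y[t-2], y[t-1], u[t-2], u[t-1]]
targets = y[2:]
weights, *_ = np.linalg.lstsq(X, targets, rcond=None)
pred = X @ weights
```

Replacing the linear regression with a multilayer network on the same lagged feature vector is what turns this into the nonlinear NARX model needed for hysteresis, where the output depends on history, not just the instantaneous input.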
Procedia PDF Downloads 131
1131 Development and Investigation of Efficient Substrate Feeding and Dissolved Oxygen Control Algorithms for Scale-Up of Recombinant E. coli Cultivation Process
Authors: Vytautas Galvanauskas, Rimvydas Simutis, Donatas Levisauskas, Vykantas Grincas, Renaldas Urniezius
Abstract:
The paper deals with model-based development and implementation of efficient control strategies for recombinant protein synthesis in fed-batch E. coli cultivation processes. Based on experimental data, a kinetic dynamic model of the cultivation process was developed. This model was used to determine substrate feeding strategies during the cultivation. The proposed feeding strategy consists of two phases: a biomass growth phase and a recombinant protein production phase. In the first process phase, a substrate-limited process is recommended in which the specific growth rate of biomass is about 90-95% of its maximum value. This ensures reduction of glucose concentration in the medium, improves process repeatability, and reduces the formation of secondary metabolites and other unwanted by-products. The substrate limitation can be strengthened to satisfy the restriction on maximum oxygen transfer rate in the bioreactor and to guarantee the necessary dissolved carbon dioxide concentration in the culture medium. In the recombinant protein production phase, the level of substrate limitation and the specific growth rate are selected within the range that enables the optimal target protein synthesis rate. To account for complex process dynamics, to efficiently exploit the oxygen transfer capability of the bioreactor, and to maintain the required dissolved oxygen concentration, adaptive algorithms for dissolved oxygen control have been proposed. The developed model-based control strategies are useful in the scale-up of cultivation processes and accelerate the implementation of innovative biotechnological processes for industrial applications.
Keywords: adaptive algorithms, model-based control, recombinant E. coli, scale-up of bioprocesses
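A substrate-limited growth phase of the kind described above is commonly implemented with an exponential feeding law that holds the specific growth rate at a setpoint. The sketch below uses the standard textbook form of that law; all parameter values are illustrative assumptions, not the paper's model or numbers.

```python
import math

# Hedged sketch of exponential substrate feeding for a fed-batch growth phase:
#   F(t) = (mu_set / Y_xs + m) * X0 * V0 * exp(mu_set * t) / S_f
# mu_set [1/h] setpoint specific growth rate, Y_xs [g/g] biomass yield on
# glucose, m [g/g/h] maintenance coefficient, X0 [g/L] and V0 [L] initial
# biomass and volume, S_f [g/L] feed glucose concentration. Values are
# hypothetical placeholders.

def feed_rate(t_h, mu_set=0.25, y_xs=0.5, m=0.04, x0=5.0, v0=4.0, s_f=500.0):
    return (mu_set / y_xs + m) * x0 * v0 * math.exp(mu_set * t_h) / s_f

rates = [feed_rate(t) for t in range(0, 10)]  # feed rate in L/h, first 10 hours
```

Setting mu_set below the maximum specific growth rate (e.g., at 90-95% of it, as recommended above) keeps the culture substrate-limited while biomass, and hence the required feed, grows exponentially.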
Procedia PDF Downloads 257
1130 In vitro Method to Evaluate the Effect of Steam-Flaking on the Quality of Common Cereal Grains
Authors: Wanbao Chen, Qianqian Yao, Zhenming Zhou
Abstract:
Whole grains with intact pericarp are largely resistant to digestion by ruminants because entire kernels are not conducive to bacterial attachment, but processing makes the starch more accessible to microbes and increases the rate and extent of starch degradation in the rumen. To estimate the feasibility of applying steam-flaking as a processing technique of grains for ruminants, cereal grains (maize, wheat, barley, and sorghum) were processed by steam-flaking (steam temperature 105 °C, heating time 45 min). Chemical analysis, in vitro gas production, volatile fatty acid concentrations, and energetic values were adopted to evaluate the effects of steam-flaking. In vitro cultivation was conducted for 48 h with rumen fluid collected from steers fed a total mixed ration consisting of 40% hay and 60% concentrates. The results showed that steam-flaking processing had a significant effect on the contents of neutral detergent fiber and acid detergent fiber (P < 0.01). The degree of starch gelatinization in all grains was also greatly improved in steam-flaked grains, as steam-flaking disintegrates the crystal structure of cereal starch, which may subsequently facilitate absorption of moisture and swelling. Theoretical maximum gas production after steam-flaking processing showed no great difference. However, compared with intact grains, total gas production at 48 h and the rate of gas production were significantly (P < 0.01) increased for all types of grain. Furthermore, there was no effect of steam-flaking processing on total volatile fatty acids, but a decrease in the ratio between acetate and propionate was observed in the current in vitro fermentation. The present study also found that steam-flaking processing increased (P < 0.05) the organic matter digestibility and energy concentration of the grains.
The collective findings of the present study suggest that steam-flaking processing of grains could improve their rumen fermentation and energy utilization by ruminants. In conclusion, the utilization of steam-flaking would be practical for improving the quality of common cereal grains.
Keywords: cereal grains, gas production, in vitro rumen fermentation, steam-flaking processing
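The gas-production kinetics summarized above (theoretical maximum vs. rate of gas production) are often described with an exponential model of cumulative gas, GP(t) = A(1 - exp(-c(t - lag))). The sketch below uses that common model; the parameter values are illustrative assumptions, not fitted values from this study.

```python
import math

# Hedged sketch of a common cumulative in vitro gas production model:
#   GP(t) = A * (1 - exp(-c * (t - lag)))   for t > lag, else 0
# A [ml] theoretical maximum gas production, c [1/h] fractional rate,
# lag [h] delay before fermentation starts. Values are hypothetical.

def gas_production(t_h, a_ml=300.0, c_per_h=0.08, lag_h=2.0):
    if t_h <= lag_h:
        return 0.0
    return a_ml * (1.0 - math.exp(-c_per_h * (t_h - lag_h)))

gp_48h = gas_production(48.0)  # cumulative gas at the 48 h endpoint used above
```

In this parameterization, steam-flaking's effect described above corresponds to a larger rate constant c (faster fermentation) with little change in the asymptote A.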
Procedia PDF Downloads 272
1129 The Journalistic Representation of Femicide in Italy
Authors: Saveria Capecchi
Abstract:
In recent decades, the issue of gender-based violence, particularly femicide, has been increasingly presented to the public by Italian media. However, it is often treated in a trivialized and sensationalistic manner, focusing on cases that exhibit the most "attractive" elements (brutality, sex, drugs, the young age and/or good looks of the victims, stories with "mystery," "horror," etc.). Furthermore, this phenomenon is most often represented by referring to the psycho-individualistic paradigm, focusing on the psychological and individual characteristics of the perpetrator rather than referring to the feminist and/or constructivist paradigms. According to the latter, the causes of male violence against women do not lie in the individual problems of the perpetrator but in the social and cultural construction of the power hierarchy between men and women. The following study presents the results of qualitative research on the journalistic approach to male violence against women in Italy, aimed at examining the limitations of the narrative strategies used by the media. The research focuses on the case of Giulia Cecchettin (killed by her ex-boyfriend Filippo Turetta on November 11, 2023), which has fueled the debate on the narrative surrounding male violence against women. This case was chosen based on its significant media coverage and the victim's family's commitment to combating gender-based violence. The research involves a content analysis of 150 articles from four different national newspapers («Corriere della Sera», «La Stampa», «Il Giornale», «la Repubblica»). Additionally, the study analyzed the social media use of two Italian newspapers («Corriere della Sera» and «la Repubblica»), examining 20 posts and their 600 related comments, highlighting the various types of public responses, including criticisms of how femicide is represented by the media. 
Furthermore, the paper will reflect on the role that the Italian women's movement and certain journalist communities have played in promoting a narrative of femicide that is more attentive to power dynamics and free from gender stereotypes.
Keywords: gender-based violence, femicide, gender stereotypes, Italian newspapers
Procedia PDF Downloads 24