Search results for: scientific literacy
448 Influence of CO₂ on the Curing of Permeable Concrete
Authors: A. M. Merino-Lechuga, A. González-Caro, D. Suescum-Morales, E. Fernández-Ledesma, J. R. Jiménez, J. M. Fernández-Rodriguez
Abstract:
Since the mid-19th century, the economy and industry have grown exponentially. This has led to an increase in pollution due to rising Greenhouse Gas (GHG) emissions and the accumulation of waste, pointing to an increasingly imminent scarcity of raw materials and natural resources. Carbon dioxide (CO₂) is one of the primary greenhouse gases, accounting for up to 55% of GHG emissions. The manufacturing of construction materials generates approximately 73% of CO₂ emissions, with Portland cement production contributing 41% of this figure. Hence, there is scientific and social alarm regarding the carbon footprint of construction materials and their influence on climate change. Carbonation of concrete is a natural process whereby CO₂ from the environment penetrates the material, primarily through pores and microcracks. Once inside, carbon dioxide reacts with calcium hydroxide (Ca(OH)₂) and/or calcium silicate hydrate (C-S-H), yielding calcium carbonate (CaCO₃) and silica gel. Consequently, construction materials act as carbon sinks. This research investigated the effect of accelerated carbonation on the physical, mechanical, and chemical properties of two types of non-structural vibrated concrete pavers (conventional and draining) made from natural aggregates and two types of recycled aggregates from construction and demolition waste (CDW). Natural aggregates were replaced by recycled aggregates using a volumetric substitution method, and the CO₂ capture capacity was calculated. Two curing environments were utilized: a carbonation chamber with 5% CO₂ and a standard climatic chamber with atmospheric CO₂ concentration. Additionally, the effect of curing times of 1, 3, 7, 14, and 28 days on concrete properties was analyzed. Accelerated carbonation increased the apparent dry density, reduced water-accessible porosity, improved compressive strength, and shortened the curing time needed to achieve greater mechanical strength.
The maximum CO₂ capture ratio was achieved with the use of recycled concrete aggregate (52.52 kg/t) in the draining paver. Accelerated carbonation conditions led to a 525% increase in carbon capture compared to curing under atmospheric conditions. Accelerated carbonation of cement-based products containing recycled aggregates from construction and demolition waste is a promising technology for CO₂ capture and utilization, offering a means to mitigate the effects of climate change and promote the new paradigm of circular economy.
Keywords: accelerated carbonation, CO₂ curing, CO₂ uptake, construction and demolition waste, circular economy
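The carbonation reaction described in the abstract fixes CO₂ in proportion to simple reaction stoichiometry. A minimal mass-balance sketch (illustrative only; the study's capture ratios were measured, not computed this way):

```python
# Stoichiometry of the carbonation reaction described above:
#   Ca(OH)2 + CO2 -> CaCO3 + H2O
# Molar masses in g/mol.
M_CAOH2 = 74.09
M_CO2 = 44.01
M_CACO3 = 100.09

def co2_bound(kg_caoh2: float) -> float:
    """Mass of CO2 (kg) fixed when a given mass of Ca(OH)2 fully carbonates."""
    return kg_caoh2 * M_CO2 / M_CAOH2

def caco3_formed(kg_caoh2: float) -> float:
    """Mass of CaCO3 (kg) produced by full carbonation of Ca(OH)2."""
    return kg_caoh2 * M_CACO3 / M_CAOH2
```

For instance, full carbonation of the portlandite in a binder would bind roughly 0.59 kg of CO₂ per kg of Ca(OH)₂, an upper bound well above the measured 52.52 kg/t, since only part of the binder carbonates in practice.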
Procedia PDF Downloads 65
447 High Strength, High Toughness Polyhydroxybutyrate-Co-Valerate Based Biocomposites
Authors: S. Z. A. Zaidi, A. Crosky
Abstract:
Biocomposites are a field that has gained much scientific attention due to the current substantial consumption of non-renewable resources and the environmentally harmful disposal methods required for traditional polymer composites. Research on natural fiber reinforced polyhydroxyalkanoates (PHAs) has gained considerable momentum over the past decade. There is little work on PHAs reinforced with unidirectional (UD) natural fibers and little work on using epoxidized natural rubber (ENR) as a toughening agent for PHA-based biocomposites. In this work, we prepared polyhydroxybutyrate-co-valerate (PHBV) biocomposites reinforced with UD 30 wt.% flax fibers and evaluated the use of ENR with 50% epoxidation (ENR50) as a toughening agent for PHBV biocomposites. Quasi-unidirectional flax/PHBV composites were prepared by hand layup and powder impregnation, followed by compression molding. Toughening agents, poly(butylene adipate-co-terephthalate) (PBAT) and ENR50, were cryogenically ground into powder and mechanically mixed with the main PHBV matrix to maintain the powder impregnation process. The tensile, flexural and impact properties of the biocomposites were measured, and the morphology of the composites was examined using optical microscopy (OM) and scanning electron microscopy (SEM). The UD biocomposites showed exceptionally high mechanical properties compared to results obtained previously where only short fibers were used. The improved tensile and flexural properties were attributed to the continuous nature of the fiber reinforcement and the increased proportion of fibers in the loading direction. The improved impact properties were attributed to a larger surface area for fiber-matrix debonding and for subsequent sliding and fiber pull-out mechanisms to act on, allowing more energy to be absorbed.
Coating cryogenically ground ENR50 particles with PHBV powder successfully inhibits the self-healing nature of ENR50, preventing particles from coalescing and overcoming problems in mechanical mixing, compounding and molding. Cryogenic grinding, followed by powder impregnation and subsequent compression molding, is an effective route to the production of high-mechanical-property biocomposites based on renewable resources for high-obsolescence applications such as plastic casings for consumer electronics.
Keywords: natural fibers, natural rubber, polyhydroxyalkanoates, unidirectional
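A first-order way to see why UD reinforcement raises stiffness so sharply is the rule of mixtures. The sketch below uses hypothetical handbook values for flax and PHBV (not figures from this study) to convert the stated 30 wt.% fiber loading into a volume fraction and estimate the longitudinal modulus:

```python
def weight_to_volume_fraction(w_f: float, rho_f: float, rho_m: float) -> float:
    """Convert a fiber weight fraction into a volume fraction, given
    fiber and matrix densities (g/cm^3)."""
    vf = w_f / rho_f
    vm = (1.0 - w_f) / rho_m
    return vf / (vf + vm)

def rule_of_mixtures(v_f: float, e_f: float, e_m: float) -> float:
    """Voigt (iso-strain) estimate of the longitudinal modulus of a UD
    composite: E_c = Vf*Ef + (1 - Vf)*Em."""
    return v_f * e_f + (1.0 - v_f) * e_m

# Hypothetical property values, for illustration only:
#   flax: E ~ 50 GPa, rho ~ 1.45 g/cm^3; PHBV: E ~ 3.5 GPa, rho ~ 1.25 g/cm^3
v_f = weight_to_volume_fraction(0.30, 1.45, 1.25)
e_longitudinal = rule_of_mixtures(v_f, 50.0, 3.5)  # GPa, along the fibers
```

With these assumed values, 30 wt.% flax corresponds to roughly 27 vol.%, and the longitudinal stiffness estimate is several times the matrix modulus, consistent with the qualitative gains the abstract reports for UD over short-fiber reinforcement.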
Procedia PDF Downloads 289
446 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer
Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi
Abstract:
Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominally 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the structure are of the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies; they are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance via Taylor’s hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales exhibit significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and aerodynamic roughness length with those calculated using the autocorrelations and cross-correlations of field measurement velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, as opposed to the relationships derived by similarity theory correlations in the ESDU models.
However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales
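The autocorrelation procedure the abstract describes can be sketched in a few lines: estimate the autocorrelation of the fluctuating velocity, integrate it up to the first zero crossing to obtain the integral time scale, and convert to a length with Taylor's hypothesis. A minimal illustration (not the authors' code):

```python
import numpy as np

def integral_length_scale(u: np.ndarray, dt: float) -> float:
    """Longitudinal integral length scale from a single-point velocity
    time series, using Taylor's frozen-turbulence hypothesis with the
    convection velocity taken as the series mean."""
    u_mean = float(np.mean(u))
    u_prime = u - u_mean
    n = len(u_prime)
    var = float(np.var(u_prime))
    # Autocorrelation for non-negative lags (biased estimator).
    acf = np.correlate(u_prime, u_prime, mode="full")[n - 1:] / (n * var)
    # Integrate up to the first zero crossing for the integral time scale.
    stop = int(np.argmax(acf <= 0)) if np.any(acf <= 0) else len(acf)
    t_int = float(np.sum(acf[:stop])) * dt
    return u_mean * t_int  # Taylor's hypothesis: L = U * T
```

Real ASL records additionally require detrending and stationarity checks before the autocorrelation is meaningful; this sketch shows only the core conversion from time lag to separation distance.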
Procedia PDF Downloads 124
445 A Mixed Finite Element Formulation for Functionally Graded Micro-Beam Resting on Two-Parameter Elastic Foundation
Authors: Cagri Mollamahmutoglu, Aykut Levent, Ali Mercan
Abstract:
Micro-beams are one of the most common components of Nano-Electromechanical Systems (NEMS) and Micro-Electromechanical Systems (MEMS). For this reason, static bending, buckling, and free vibration analysis of micro-beams have been the subject of many studies. In addition, micro-beams restrained by elastic foundations have been of particular interest. In the analysis of microstructures, closed-form solutions are proposed when available, but most of the time solutions are based on numerical methods due to the complex nature of the resulting differential equations. Thus, a robust and efficient solution method is of great importance. In this study, a mixed finite element formulation is obtained for a functionally graded Timoshenko micro-beam resting on a two-parameter elastic foundation. In the formulation, modified couple stress theory is utilized for the micro-scale effects. The equation of motion and boundary conditions are derived according to Hamilton’s principle. A functional, derived through a systematic procedure based on the Gateaux differential, is proposed for the bending and buckling analysis; it is equivalent to the governing equations and boundary conditions. The most important advantage of the formulation is that it allows the usage of C₀-continuous shape functions, so shear locking is avoided in a built-in manner. Also, the element matrices are sparsely populated and can be easily calculated with closed-form integration. In this framework, results concerning the effects of the micro-scale length parameter, power-law parameter, aspect ratio and coefficients of a partially or fully continuous elastic foundation on the static bending, buckling, and free vibration response of the FG micro-beam under various boundary conditions are presented and compared with the existing literature.
Performance characteristics of the presented formulation were evaluated against other numerical methods such as the generalized differential quadrature method (GDQM). It is found that similar convergence characteristics were obtained with less computational burden. Moreover, the formulation also allows a direct calculation of the micro-scale-related contributions to the structural response.
Keywords: micro-beam, functionally graded materials, two-parameter elastic foundation, mixed finite element method
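To illustrate how the two foundation parameters enter a buckling problem, here is the classical macro-scale result for a simply supported Euler-Bernoulli beam on a two-parameter (Pasternak) foundation; it omits the couple-stress and shear-deformation terms of the paper's Timoshenko micro-beam model and is shown only as a reference point:

```python
import math

def pasternak_buckling_load(ei: float, k: float, g: float, length: float,
                            n_max: int = 20) -> float:
    """Critical buckling load of a simply supported Euler-Bernoulli beam
    on a two-parameter elastic foundation (Winkler modulus k, shear
    modulus g).  Substituting w = sin(n*pi*x/L) into
        EI*w'''' + (P - G)*w'' + k*w = 0
    gives P(n) = EI*b^2 + k/b^2 + G with b = n*pi/L, minimised over the
    mode number n."""
    best = math.inf
    for n in range(1, n_max + 1):
        b = n * math.pi / length
        best = min(best, ei * b * b + k / (b * b) + g)
    return best
```

Two features carry over qualitatively to the micro-beam case: the shear parameter G simply shifts the critical load upward, while a stiff Winkler layer can move the minimum to a higher buckling mode.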
Procedia PDF Downloads 160
444 Refractory Cardiac Arrest: Do We Go beyond, Do We Increase the Organ Donation Pool or Both?
Authors: Ortega Ivan, De La Plaza Edurne
Abstract:
Background: Spain and other European countries have implemented Uncontrolled Donation after Cardiac Death (uDCD) programs. After 15 years of experience in Spain, many things have changed. Recent evidence and technical breakthroughs achieved in resuscitation are relevant for uDCD programs and raise some ethical concerns related to these protocols. Aim: To rethink current uDCD programs in the light of recent evidence on available therapeutic procedures applicable to victims of out-of-hospital cardiac arrest (OHCA), and to address the following question: what is the current standard of treatment owed to victims of OHCA before including them in a uDCD protocol? Materials and Methods: Review of the scientific and ethical literature related to both uDCD programs and innovative resuscitation techniques. Results: 1) The standard of treatment received and the chances of survival of victims of OHCA depend on whether they are classified as Non-Heart-Beating Patients (NHBPs) or Non-Heart-Beating Donors (NHBDs). 2) Recent studies suggest that NHBPs are likely to survive, with good quality of life, if one or more of the following interventions are performed while ongoing CPR, guided by the suspected or known cause of OHCA, is maintained: a) direct access to a 24-hour catheterization laboratory and/or to extracorporeal life support (ECLS); b) transfer in induced hypothermia from the Emergency Medical Service (EMS) to the ICU; c) thrombolysis treatment; d) mobile extracorporeal membrane oxygenation (mini ECMO) instituted as a bridge to ICU ECLS devices. 3) Victims of OHCA who cannot benefit from any of these therapies should be considered NHBDs. Conclusion: Current uDCD protocols do not take into account recent improvements in resuscitation and need to be adapted. Operational criteria to distinguish NHBDs from NHBPs should seek a balance between the technical imperative (to do whatever is possible), considerations about expected survival with quality of life, and distributive justice (costs/benefits).
Uncontrolled DCD protocols can be performed in a way that does not hamper the legitimate interests of patients, potential organ donors, their families, the organ recipients, and the health professionals involved in these processes. Families of NHBDs should receive information that conforms to the ethical principles of respect for autonomy and transparency.
Keywords: uncontrolled donation after cardiac death, resuscitation, refractory cardiac arrest, out-of-hospital cardiac arrest, ethics
Procedia PDF Downloads 237
443 Effectiveness Assessment of a Brazilian Larvicide on Aedes Control
Authors: Josiane N. Muller, Allan K. R. Galardo, Tatiane A. Barbosa, Evan P. Ferro, Wellington M. Dos Santos, Ana Paula S. A. Correa, Edinaldo C. Rego, Jose B. P. Lima
Abstract:
The susceptibility status of an insect population to any larvicide depends on several factors, including genetic constitution and environmental conditions, among others. The mosquito Aedes aegypti is the primary vector of three important viral diseases: Zika, Dengue, and Chikungunya. The frequent outbreaks of those diseases in different parts of Brazil demonstrate the importance of testing the susceptibility of vectors in different environments. Since the control of this mosquito leads to the control of disease, alternatives for vector control that account for the varied Brazilian environmental conditions are needed for effective action. The aim of this study was to evaluate a new commercial formulation of Bacillus thuringiensis israelensis (DengueTech: Brazilian innovative technology) in the Brazilian Legal Amazon, considering the local climate conditions. Semi-field tests were conducted at the Institute of Scientific and Technological Research of the State of Amapa in two different environments, one in a shaded area and the other exposed to sunlight. The mosquito larvae were exposed to the larvicide and to an untreated control; each group was tested in three containers of 40 liters each. To assess persistence, 50 third-instar larvae of an Aedes aegypti laboratory lineage (Rockefeller) and 50 larvae of Aedes aegypti collected in the municipality of Macapa, in Brazil’s Amapa state, were added weekly, and after 24 hours the mortality was assessed. In total, 16 tests were performed, of which 12 were done with replacement of water (1/5 of the volume, three times per week). The effectiveness of the product was determined through mortality of ≥ 80%, as recommended by the World Health Organization. The results demonstrated that the high water temperatures (26-35 °C) in the containers influenced the residual time of the product: the maximum effect achieved was 21 days, in the shaded area, and none of the tests reached the 60-day effectiveness expected according to the larvicide company.
Tests with and without water replacement did not present significant differences in the mortality rate. Considering the different environments and climates, these results highlight the need to test larvicides and their effectiveness in specific environmental settings in order to identify the parameters required for better results. Thus, we see the importance of semi-field research that considers local climate conditions for successful control of Aedes aegypti.
Keywords: Aedes aegypti, bioassay, larvicide, vector control
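The ≥ 80% effectiveness criterion above is straightforward to encode. The sketch below also applies Abbott's formula, the standard correction for control mortality in WHO larvicide bioassays (hypothetical helper names, not the authors' code):

```python
def mortality_pct(dead: int, total: int) -> float:
    """Observed mortality as a percentage of exposed larvae."""
    return 100.0 * dead / total

def abbott_corrected(treated_pct: float, control_pct: float) -> float:
    """Abbott's formula: treated mortality corrected for control mortality,
    conventionally applied when control mortality is between 5% and 20%."""
    return 100.0 * (treated_pct - control_pct) / (100.0 - control_pct)

def is_effective(pct: float, threshold: float = 80.0) -> bool:
    """WHO effectiveness criterion used in the study: mortality >= 80%."""
    return pct >= threshold
```

For example, 90% treated mortality with 10% control mortality corrects to about 88.9%, which still passes the 80% threshold; 75% does not.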
Procedia PDF Downloads 129
442 Dynamic Two-Way FSI Simulation for a Blade of a Small Wind Turbine
Authors: Alberto Jiménez-Vargas, Manuel de Jesús Palacios-Gallegos, Miguel Ángel Hernández-López, Rafael Campos-Amezcua, Julio Cesar Solís-Sanchez
Abstract:
An optimal wind turbine blade design must be able to capture as much energy as possible from the wind source available at the area of interest. Many times, an optimal design means the use of large quantities of material and complicated processes that make the wind turbine more expensive and, therefore, less cost-effective. For the construction and installation of a wind turbine, the blades may cost up to 20% of the total price, and they are all the more important because they are part of the rotor system that is in charge of transmitting the energy from the wind to the power train, and where the static and dynamic design loads for the whole wind turbine are produced. The aim of this work is the development of a blade fluid-structure interaction (FSI) simulation that allows the identification of the major damage zones during normal production conditions, so that better design and optimization decisions can be taken. The simulation is a dynamic case, since we have a time-history wind velocity as the inlet condition instead of a constant wind velocity. The process begins with the freely available NuMAD software (NREL), used to model the blade and assign material properties; the 3D model is then exported to the ANSYS Workbench platform, where, before setting up the FSI system, a modal analysis is performed to identify natural frequencies and mode shapes. The FSI analysis is carried out with the two-way technique, which begins with a CFD simulation to obtain the pressure distribution on the blade surface; these results are then used as the boundary condition for the FEA simulation to obtain the deformation levels for the first time step. For the second time step, the CFD simulation is reconfigured automatically with the next time-step inlet wind velocity and the deformation results from the previous time step. The analysis continues this iterative cycle, solving time step by time step until the entire load case is completed.
This work is part of a set of projects managed by a national consortium called “CEMIE-Eólico” (Mexican Center for Wind Energy Research), created to strengthen technological and scientific capacities, promote the training of specialized human resources, and link academia with the private sector in the national territory. The analysis belongs to the design of a rotor system for a 5 kW wind turbine intended to be installed at the Isthmus of Tehuantepec, Oaxaca, Mexico.
Keywords: blade, dynamic, FSI, wind turbine
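The sequential two-way coupling the abstract describes can be sketched as a loop in which each CFD solve sees the deformation from the previous step. The stand-in "solvers" below (a dynamic-pressure load model and a one-degree-of-freedom elastic section, with made-up coefficients) illustrate only the data flow, not the actual ANSYS workflow:

```python
RHO_AIR = 1.225  # kg/m^3, sea-level air density

def cfd_step(wind_speed: float, deformation: float) -> float:
    """Stand-in CFD solve: dynamic pressure on the blade, reduced slightly
    as the blade deforms away from the flow (illustrative coupling term)."""
    q = 0.5 * RHO_AIR * wind_speed ** 2
    return q * max(0.0, 1.0 - 0.05 * deformation)

def fea_step(pressure: float, area: float = 2.0, stiffness: float = 5000.0) -> float:
    """Stand-in FEA solve: static deflection of a linear-elastic section."""
    return pressure * area / stiffness

def two_way_fsi(wind_history):
    """Time-step-by-time-step two-way loop: each CFD solve uses the
    deformation from the previous step, as the abstract describes."""
    deformation = 0.0
    deformations = []
    for v in wind_history:
        pressure = cfd_step(v, deformation)     # fluid solve with current shape
        deformation = fea_step(pressure)        # structural solve with new load
        deformations.append(deformation)
    return deformations
```

In the real workflow, each stand-in call is a full CFD or FEA solve and the exchanged quantities are a surface pressure field and a mesh displacement field rather than scalars.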
Procedia PDF Downloads 482
441 Interventional Radiology Perception among Medical Students
Authors: Shujon Mohammed Alazzam, Sarah Saad Alamer, Omar Hassan Kasule, Lama Suliman Aleid, Mohammad Abdulaziz Alakeel, Boshra Mosleh Alanazi, Abdullah Abdulelah Altowairqi, Yahya Ali Al-Asiri
Abstract:
Background: Interventional radiology (IR) is a specialized field within radiology that diagnoses and treats several conditions through minimally invasive procedures guided by various radiological techniques. In the last few years, the role of IR has expanded to include a variety of organ systems, which has led to an increase in demand for the specialty. The level of knowledge regarding IR is generally low. In this study, we aimed to investigate the perceptions of interventional radiology (IR) as a specialty among medical students and medical interns in Riyadh, Saudi Arabia. Methodology: This was a cross-sectional study. The target population was medical students in Riyadh city, KSA, in January 2023. We used a questionnaire in face-to-face interviews with voluntary participants to assess their knowledge of interventional radiology. Permission was taken from participants to use their information, and they were assured that the data would be used only for scientific purposes. Results: According to the inclusion criteria, a total of 314 students participated in the study: 49% of the participants were in the preclinical years and 51% in the clinical years. The findings indicate that more than half of the students (58%) thought they had good information about IR, while 42% reported poor information and knowledge about IR. Only 28% of students were planning to take an elective radiology rotation, and 27% said they would consider a career in IR. Among the 73% of participants who would not consider a career in IR, the most common reasons were "I do not find it interesting" (45%), followed by "radiation exposure" (14%). Around half (48%) thought that an interventional radiologist must complete residency training in both radiology and surgery, and just 36% of the students believed that an interventional radiologist must finish training in radiology only.
Regarding procedures performed by interventional radiologists, 66% of students identified lower limb angioplasty and stenting, and 58% identified cardiac angioplasty or stenting. 68% of the students were familiar with angioplasty; when asked about their source of exposure to angioplasty, the majority (46%) cited a cardiologist and only 16% an interventional radiologist. Regarding IR career prospects, 78% of the students believed that interventional radiologists have good career prospects. In conclusion, our findings reveal that perception of and exposure to IR among medical students and interns are generally poor. This has a direct influence on students' decisions regarding IR as a career path. To attract medical students and promote IR as a career, knowledge among medical students and future physicians should be increased through early exposure to IR, which will promote the specialty's growth; the involvement of the Saudi Interventional Radiology Society and the Radiological Society of Saudi Arabia is also essential.
Keywords: knowledge, medical students, perceptions, radiology, interventional radiology, Saudi Arabia
Procedia PDF Downloads 89
440 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours
Authors: Fikret Yalcinkaya, Hamza Unsal
Abstract:
To understand how neurons work, experimental studies in neuroscience must be combined with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron modeling functions have been of great interest in computational and numerical neuroscience in recent years. Spiking neuron models can be classified by the various neuronal behaviors they exhibit, such as spiking and bursting. These classifications are important for researchers working in theoretical neuroscience. In this paper, three different spiking neuron models, Izhikevich, Adaptive Exponential Integrate-and-Fire (AEIF) and Hindmarsh-Rose (HR), all based on systems of first-order differential equations, are discussed and compared. First, the physical meanings, derivatives, and differential equations of each model are provided and simulated in the Matlab environment. Then, by selecting appropriate parameters, the models were visually examined in the Matlab environment with the aim of demonstrating which model can simulate well-known biological neuron behaviours such as tonic spiking, tonic bursting, mixed-mode firing, spike frequency adaptation, resonator and integrator behaviour. As a result, the Izhikevich model was shown to reproduce regular spiking, chattering (continuous bursting), intrinsically bursting, thalamo-cortical, low-threshold spiking and resonator behaviour. The AEIF model was able to produce firing patterns such as regular firing, adaptive firing, initial bursting, regular bursting, delayed firing, delayed regular bursting, transient firing and irregular firing. The Hindmarsh-Rose model showed three different dynamic neuron behaviours: spiking, bursting and chaotic firing.
From these results, the Izhikevich cell model may be preferred due to its ability to reflect the true behavior of the nerve cell, its ability to produce different types of spikes, and its suitability for use in larger-scale brain models. The most important reason for choosing the AEIF model is that it can create rich firing patterns with fewer parameters. The chaotic behaviours of the Hindmarsh-Rose neuron model, like those of some chaotic systems, are thought to be useful in many scientific and engineering applications such as physics, secure communication and signal processing.
Keywords: Izhikevich, adaptive exponential integrate fire, Hindmarsh Rose, biological neuron behaviours, spiking neuron models
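As a concrete reference, the Izhikevich model discussed above is just two coupled first-order ODEs plus a reset rule. A forward-Euler sketch, using the regular-spiking parameters from Izhikevich's 2003 paper (the original study used Matlab; this Python version is illustrative):

```python
def izhikevich(a: float, b: float, c: float, d: float, i_ext: float,
               t_max: float = 200.0, dt: float = 0.25):
    """Forward-Euler integration of the Izhikevich model:
        v' = 0.04 v^2 + 5 v + 140 - u + I
        u' = a (b v - u)
    with the reset rule v <- c, u <- u + d whenever v >= 30 mV.
    Returns the spike count and the clipped membrane-potential trace."""
    v, u = -65.0, b * -65.0
    spikes, trace = 0, []
    for k in range(int(t_max / dt)):
        i = i_ext if k * dt > 10.0 else 0.0  # step current switched on at 10 ms
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i)
        u += dt * a * (b * v - u)
        trace.append(min(v, 30.0))           # clip the spike peak at +30 mV
        if v >= 30.0:
            v, u = c, u + d                  # spike: reset v, bump recovery u
            spikes += 1
    return spikes, trace

# Regular-spiking (RS) cortical neuron: a=0.02, b=0.2, c=-65, d=8
```

Changing only the four parameters (a, b, c, d) switches the model between the behaviour classes listed above, which is precisely the flexibility-per-parameter argument the comparison turns on.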
Procedia PDF Downloads 180
439 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components
Authors: Najeh Lakhoua
Abstract:
Introduction: Scientific developments and techniques for the systemic approach have generated several names for it: systems analysis, systemic analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach which organizes knowledge, creates a universal design language and masters complex systems. In fact, system analysis is structured sequentially in steps: the observation of the system by various observers and in various aspects, the analysis of interactions and regulatory chains, the modeling that takes into account the evolution of the system, and the simulation and real tests carried out in order to obtain consensus. Thus, the system approach allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis to Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods are proposed in the literature to carry out global analyses from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted in order to contribute to the system analysis of an Unmanned Aerial Vehicle is proposed in this paper and is based on the use of SADT. In fact, we present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls and communications). Results: In this part, we present the application of the SADT method for the functional analysis of the UAV components. This SADT model is composed exclusively of actigrams. It starts with the main function ‘To analyse the UAV components’. Then, this function is broken into sub-functions, and the process is developed until the last decomposition level has been reached (levels A1, A2, A3 and A4).
Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certitude which model is the right one or, at least, the best one. In fact, this kind of model allows users sufficient freedom in its construction, and so the subjective factor introduces a supplementary dimension for its validation. That is why, on the whole, the validation step necessitates the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis to Unmanned Aerial Vehicle components. This application of system analysis is based on the SADT method (Structured Analysis and Design Technique). The functional analysis demonstrated the usefulness of the SADT method and its ability to describe complex dynamic systems.
Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture
Procedia PDF Downloads 204
438 Integration of “FAIR” Data Principles in Longitudinal Mental Health Research in Africa: Lessons from a Landscape Analysis
Authors: Bylhah Mugotitsa, Jim Todd, Agnes Kiragga, Jay Greenfield, Evans Omondi, Lukoye Atwoli, Reinpeter Momanyi
Abstract:
The INSPIRE network aims to build an open, ethical, sustainable, and FAIR (Findable, Accessible, Interoperable, Reusable) data science platform, particularly for longitudinal mental health (MH) data. While studies have been done at the clinical and population level, limitations in data and research still exist in LMICs, which pose a risk of underrepresentation of mental disorders. It is vital to examine the existing longitudinal MH data, focusing on how FAIR the datasets are. This landscape analysis aimed to provide both an overall level of evidence of the availability of longitudinal datasets and the degree of consistency in the longitudinal studies conducted. Utilizing prompt-based tools proved instrumental in streamlining the analysis process, facilitating access to, code generation for, and categorization and analysis of extensive data repositories related to depression, anxiety, and psychosis in Africa. Leveraging artificial intelligence (AI), we filtered through over 18,000 scientific papers spanning 1970 to 2023. This AI-driven approach enabled the identification of 228 longitudinal research papers meeting the inclusion criteria. Quality assurance revealed 10% incorrectly identified articles and 2 duplicates; the retained papers showed that longitudinal MH research is concentrated in South Africa and focused on depression. From the analysis, evaluating data and metadata adherence to FAIR principles remains crucial for enhancing the accessibility and quality of MH research in Africa. While AI has the potential to enhance research processes, challenges such as privacy concerns and data security risks must be addressed. Ethical and equity considerations in data sharing and reuse are also vital. There is a need for collaborative efforts across disciplinary and national boundaries to improve the Findability and Accessibility of data. Current efforts should also focus on creating integrated data resources and tools to improve the Interoperability and Reusability of MH data.
Practical steps for researchers include careful study planning, data preservation, machine-actionable metadata, and promoting data reuse to advance science and improve equity. Metrics and recognition should be established to incentivize adherence to FAIR principles in MH research.
Keywords: longitudinal mental health research, data sharing, FAIR data principles, Africa, landscape analysis
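Machine-actionable metadata checks like those recommended above can start from something as simple as a per-principle field checklist. The field names below are illustrative placeholders, not a published FAIR metric:

```python
# Hypothetical, simplified checklist: required metadata fields per principle.
FAIR_CHECKS = {
    "findable": ["identifier", "title", "keywords"],
    "accessible": ["access_url", "license"],
    "interoperable": ["format", "vocabulary"],
    "reusable": ["provenance", "license"],
}

def fair_score(record: dict) -> dict:
    """Fraction of required fields present and non-empty, per FAIR principle."""
    return {
        principle: sum(1 for field in fields if record.get(field)) / len(fields)
        for principle, fields in FAIR_CHECKS.items()
    }
```

A dataset record missing an access URL or provenance statement would score below 1.0 on the corresponding principles, giving curators a concrete, automatable target rather than an abstract exhortation to "be FAIR".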
Procedia PDF Downloads 89
437 Green and Cost-Effective Biofabrication of Copper Oxide Nanoparticles: Exploring Antimicrobial and Anticancer Applications
Authors: Yemane Tadesse Gebreslassie, Fisseha Guesh Gebremeskel
Abstract:
Nanotechnology has made remarkable advancements in recent years, revolutionizing various scientific fields, industries, and research institutions through the utilization of metal and metal oxide nanoparticles. Among these nanoparticles, copper oxide nanoparticles (CuO NPs) have garnered significant attention due to their versatile properties and wide-ranging applications, particularly as effective antimicrobial and anticancer agents. CuO NPs can be synthesized using different methods, including physical, chemical, and biological approaches. However, conventional chemical and physical approaches are expensive, resource-intensive, and involve the use of hazardous chemicals, which can pose risks to human health and the environment. In contrast, biological synthesis provides a sustainable and cost-effective alternative by eliminating chemical pollutants and allowing for the production of CuO NPs of tailored sizes and shapes. This comprehensive review focused on the green synthesis of CuO NPs using various biological resources, such as plants, microorganisms, and other biological derivatives. Current knowledge and recent trends in green synthesis methods for CuO NPs are discussed, with a specific emphasis on their biomedical applications, particularly in combating cancer and microbial infections. This review highlights the significant potential of CuO NPs in addressing these diseases. By capitalizing on the advantages of biological synthesis, such as environmental safety and the ability to customize nanoparticle characteristics, CuO NPs have emerged as promising therapeutic agents for a wide range of conditions. This review presents compelling findings, demonstrating the remarkable achievements of biologically synthesized CuO NPs as therapeutic agents. Their unique properties and mechanisms enable effective combating of cancer cells and various harmful microbial infections.
CuO NPs exhibit potent anticancer activity through diverse mechanisms, including induction of apoptosis, inhibition of angiogenesis, and modulation of signaling pathways. Additionally, their antimicrobial activity manifests through various mechanisms, such as disrupting microbial membranes, generating reactive oxygen species, and interfering with microbial enzymes. This review offers valuable insights into the substantial potential of biologically synthesized CuO NPs as an alternative approach for future therapeutic interventions against cancer and microbial infections. Keywords: biological synthesis, copper oxide nanoparticles, microbial infection, nanotechnology
Procedia PDF Downloads 62436 The Interaction of Climate Change and Human Health in Italy
Authors: Vito Telesca, Giuseppina A. Giorgio, M. Ragosta
Abstract:
The effects of extreme heat events have increased in recent years, forcing humans to adjust to adverse climatic conditions. The impact of weather on human health has taken on public health significance, especially in light of climate change and the rising frequency of devastating weather events (e.g., heat waves and floods). The interest of the scientific community is widely known; in particular, the associations between temperature and mortality are well studied. Weather conditions are natural factors that affect the human organism. Recent work shows that the temperature threshold at which an impact is seen varies by geographic area and season. These results suggest that heat warning criteria should consider local thresholds, to account for acclimation to the local climatology as well as for the seasonal timing of a forecasted heat wave. Hence the importance of the problem known as 'local warming', which is preventable with adequate warning tools and effective emergency planning. Since climate change has the potential to increase the frequency of these events, improved heat warning systems are urgently needed. This would require better knowledge of the full impact of extreme heat on morbidity and mortality. Most researchers who analyze the associations between human health and weather variables investigate the effect of air temperature and of bioclimatic indices; these indices combine air temperature, relative humidity, and wind speed and are very important in determining human thermal comfort. Health impact studies of weather events have shown that prevention is essential to dramatically reduce the impact of heat waves. The Italian summer of 2012 was characterized by high average temperatures (+2.3°C with respect to the 1971-2000 reference period), enough to be considered the second hottest summer since 1800.
Italy was the first country in Europe to adopt tools to predict these phenomena 72 hours in advance (the Heat Health Watch Warning System, HHWWS). Furthermore, Italian heat alert criteria rely on different indices, for example the Apparent Temperature, the Scharlau index, and the Thermohygrometric Index. This study examines the importance of developing public health policies that protect the people most vulnerable to extreme temperatures (such as the elderly), highlighting the factors that confer susceptibility. Keywords: heat waves, Italy, local warming, temperature
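Bioclimatic indices of the kind named above reduce to simple closed-form combinations of the weather variables. As an illustration only (the Steadman-style formulation; not necessarily the exact index used by the Italian HHWWS), the Apparent Temperature can be sketched as:

```python
import math

def apparent_temperature(ta_c, rh_pct, wind_ms):
    """Steadman-style Apparent Temperature in deg C, one common bioclimatic
    index combining air temperature (ta_c, deg C), relative humidity
    (rh_pct, %) and wind speed (wind_ms, m/s)."""
    # Water vapour pressure (hPa) from temperature and relative humidity
    e = (rh_pct / 100.0) * 6.105 * math.exp(17.27 * ta_c / (237.7 + ta_c))
    return ta_c + 0.33 * e - 0.70 * wind_ms - 4.00

# A hot, humid, nearly calm afternoon feels hotter than the thermometer
# reads, while wind lowers the index:
at_calm = apparent_temperature(35.0, 50.0, 1.0)
at_windy = apparent_temperature(35.0, 50.0, 8.0)
```

Local warning thresholds would then be applied to the index value rather than to the raw air temperature.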
Procedia PDF Downloads 243435 Coastal Water Characteristics along the Saudi Arabian Coastline
Authors: Yasser O. Abualnaja, Alexandra Pavlidou, Taha Boksmati, Ahmad Alharbi, Hammad Alsulmi, Saleh Omar Maghrabi, Hassan Mowalad, Rayan Mutwalli, James H. Churchill, Afroditi Androni, Dionysios Ballas, Ioannis Hatzianestis, Harilaos Kontoyiannis, Angeliki Konstantinopoulou, Georgios Krokkos, Georgios Pappas, Vassilis P. Papadopoulos, Konstantinos Parinos, Elvira Plakidi, Eleni Rousselaki, Dimitris Velaoras, Panagiota Zachioti, Theodore Zoulias, Ibrahim Hoteit
Abstract:
The coastal areas of the Kingdom of Saudi Arabia, on both the Red Sea and the Arabian Gulf, have witnessed unprecedented economic growth and a rapid increase in anthropogenic activities over the past decades. The Saudi Arabian government has therefore decided to frame a strategy for the sustainable development of the coastal and marine environments which, in the context of Vision 2030, aims at providing the first comprehensive 'Status Quo Assessment' of the Kingdom's coastal and marine environments. This strategy will serve as a baseline for future monitoring activities; the baseline relies on scientific evidence of the drivers, the pressures, and their impact on the environments of the Red Sea and the Arabian Gulf. A key element of the assessment was a cumulative-pressures hotspot analysis, developed following the principles of the Driver-Pressure-State-Impact-Response (DPSIR) framework and using the cumulative pressure and impact assessment methodology. Ten hotspot sites were identified, eight in the Red Sea and two in the Arabian Gulf. Multidisciplinary research cruises were then conducted throughout the Red Sea and Arabian Gulf coastal and marine environments in June/July 2021 and September 2021, respectively, in order to understand the relative impact of hydrography and of the various pressures on the quality of seawater and sediments. The main objective was to record the physical and biogeochemical parameters along the Kingdom's coastal waters, tracing the dispersion of contaminants related to specific pressures. The assessment revealed the effect of hydrography on the trophic status of the southern coastal marine areas of the Red Sea. The Jeddah Lagoon system appears to face significant eutrophication and pollution challenges, whereas sediments are enriched in some heavy metals in many areas of the Red Sea and the Arabian Gulf.
This multidisciplinary research in the Red Sea and Arabian Gulf coastal waters will pave the way for future detailed environmental monitoring strategies for the Saudi Arabian marine environment. Keywords: Arabian Gulf, contaminants, hotspot, Red Sea
Procedia PDF Downloads 112434 The Grand Egyptian Museum as a Cultural Interface
Authors: Mahmoud Moawad Mohamed Osman
Abstract:
The Egyptian civilization was, and still is, an inspiration for many human civilizations and modern sciences, which is why the passion for ancient Egypt endures. Given the breadth and abundance of the output of the ancient Egyptian civilization, many museums have been established to display and demonstrate its splendor; among them is the Grand Egyptian Museum (Egypt's gift to the whole world). The idea of establishing the Grand Egyptian Museum arose in the 1990s, and in 2002 the foundation stone of the museum project was laid at a privileged location overlooking the eternal pyramids of Giza. The Egyptian state then announced, under the auspices of the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the International Union of Architects, an international architectural competition for the best design for the museum. The winning design, submitted by Heneghan Peng Architects of Ireland, was based on rays of the sun extending from the tops of the three pyramids and meeting to form a conical mass: the Grand Egyptian Museum. Construction of the museum project began in May 2005, when the site was paved and prepared, and in 2006 the largest antiquities restoration center in the Middle East was established, dedicated to the restoration, preservation, maintenance, and rehabilitation of the antiquities scheduled to be displayed in the museum halls; it opened in 2010. The museum building, with an area of more than 300,000 square meters, was completed during 2021 and includes a number of exhibition halls, each of which is larger than many current museums in Egypt and the world. The museum is considered one of the most important and greatest achievements of modern Egypt.
It was created to be an integrated global civilizational, cultural, and entertainment edifice and the first destination for everyone interested in ancient Egyptian heritage: the largest museum in the world telling the story of the ancient Egyptian civilization. It contains a large number of distinctive and unique artifacts, including the treasures of the golden king Tutankhamun, displayed in their entirety for the first time since the discovery of his tomb in November 1922; the collection of Queen Hetepheres, mother of King Khufu, builder of the Great Pyramid at Giza; the Museum of King Khufu's Boats; and various archaeological collections from the pre-dynastic era through the Greek and Roman eras. Keywords: Grand Egyptian Museum, Egyptian civilization, education, museology
Procedia PDF Downloads 44433 Chemical, Physical and Microbiological Characteristics of a Texture-Modified Beef- Based 3D Printed Functional Product
Authors: Elvan G. Bulut, Betul Goksun, Tugba G. Gun, Ozge Sakiyan Demirkol, Kamuran Ayhan, Kezban Candogan
Abstract:
Dysphagia, difficulty in swallowing solid foods and thin liquids, is one of the common health threats among the elderly, who require foods with modified texture in their diet. Although there are some commercial food formulations and hydrocolloids for thickening liquid foods for dysphagic individuals, there is still a need to develop and offer new food products with enriched nutritional, textural, and sensory characteristics to safely nourish these patients. 3D food printing is an appealing alternative for creating personalized foods for this purpose, with an attractive shape and a soft, homogeneous texture. Hydrocolloids are generally used to modify texture and prevent phase separation. In our laboratory, an optimized 3D-printed beef-based formulation specifically for people with swallowing difficulties was developed in a research project supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK Project # 218O017). The optimized formulation obtained from response surface methodology was 60% beef powder, 5.88% gelatin, and 0.74% kappa-carrageenan (all on a dry basis). This product was enriched with powders of freeze-dried beet, celery, and red capia pepper, butter, and whole milk. Proximate composition (moisture, fat, protein, and ash contents), pH value, CIE lightness (L*), redness (a*), yellowness (b*), and color difference (ΔE*) values were determined. Counts of total mesophilic aerobic bacteria (TMAB), lactic acid bacteria (LAB), mold and yeast, and total coliforms were conducted, and detection of coagulase-positive S. aureus, E. coli, and Salmonella spp. was performed. The 3D printed products had 60.11% moisture, 16.51% fat, 13.68% protein, and 1.65% ash; the pH value was 6.19, and the ΔE* value was 3.04. Counts of TMAB, LAB, mold and yeast, and total coliforms before and after 3D printing were 5.23-5.41 log cfu/g, < 1 log cfu/g, < 1 log cfu/g, and 2.39-2.15 log MPN/g, respectively. Coagulase-positive S. aureus, E.
coli, and Salmonella spp. were not detected in the products. The data obtained from this study, based on determining some important product characteristics of a functional beef-based formulation, provide an encouraging basis for future research on the subject and should be useful in designing mass production of 3D-printed products of similar composition. Keywords: beef, dysphagia, product characteristics, texture-modified foods, 3D food printing
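The ΔE* value reported above is a single-number colour difference in CIELAB space. Assuming the simplest (CIE76) definition, since the abstract does not specify which ΔE* formula was used, it can be computed as:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two CIELAB colours,
    each given as a (L*, a*, b*) tuple."""
    dL = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    return math.sqrt(dL**2 + da**2 + db**2)

# Hypothetical reference vs. printed-product colours:
de = delta_e_cie76((52.0, 8.0, 14.0), (50.0, 6.5, 13.0))
```

A ΔE* of about 3, as reported for the printed product, corresponds to a colour shift that is small but perceptible to a trained observer under the CIE76 metric.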
Procedia PDF Downloads 111432 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions
Authors: Erva Akin
Abstract:
The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train on and to improve their accuracy and creativity, the use of copyrighted material without the authors' permission may infringe their intellectual property rights. To overcome the copyright hurdle to data sharing, access, and re-use, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, the use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing: the focus of such 'copy-reliant technologies' is on understanding language rules, styles, and syntax, and no creative ideas are used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a 'reproduction' in the first place. Nevertheless, the use of machine learning with copyrighted material remains difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data.
The first is to introduce a broad exception for text and data mining, either mandatory or limited to commercial and scientific purposes. The second is for copyright laws to permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from US into EU law. Both solutions aim to give AI developers more space to operate and to encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance between general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that machine-generated output fall into the public domain: machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and of business operation must be prioritised. Keywords: artificial intelligence, copyright, data governance, machine learning
Procedia PDF Downloads 83431 Foundations for Global Interactions: The Theoretical Underpinnings of Understanding Others
Authors: Randall E. Osborne
Abstract:
In a course on International Psychology, 8 theoretical perspectives (Critical Psychology, Liberation Psychology, Post-Modernism, Social Constructivism, Social Identity Theory, Social Reduction Theory, Symbolic Interactionism, and Vygotsky's Sociocultural Theory) are used as a framework for getting students to understand the concept of, and need for, globalization. One of critical psychology's main criticisms of conventional psychology is that it fails to consider, or deliberately ignores, the way power differences between social classes and groups can impact the mental and physical well-being of individuals or groups of people. Liberation psychology, also known as liberation social psychology or psicología social de la liberación, is an approach to psychological science that aims to understand the psychology of oppressed and impoverished communities by addressing the oppressive sociopolitical structure in which they exist. Postmodernism is largely a reaction to the assumed certainty of scientific, or objective, efforts to explain reality. It stems from a recognition that reality is not simply mirrored in human understanding of it but rather is constructed as the mind tries to understand its own particular and personal reality. Lev Vygotsky argued that all cognitive functions originate in, and must therefore be explained as products of, social interactions, and that learning is not simply the assimilation and accommodation of new knowledge by learners. Social Identity Theory discusses the implications of social identity for human interactions with, and assumptions about, other people. It suggests that people: (1) categorize: people find it helpful (humans might even be said to have a need) to place people and objects into categories; (2) identify: people align themselves with groups and gain identity and self-esteem from them; and (3) compare: people compare themselves to others.
Social reductionism argues that all behavior and experience can be explained simply by the effect of groups on the individual. Symbolic interaction theory focuses attention on the way people interact through symbols: words, gestures, rules, and roles. Meaning evolves from humans' interactions with their environment and with other people. Vygotsky's sociocultural theory of human learning describes learning as a social process and locates the origin of human intelligence in society or culture; its major theme is that social interaction plays a fundamental role in the development of cognition. This presentation will discuss how these theoretical perspectives are incorporated into a course on International Psychology, a course on the Politics of Hate, and a course on the Psychology of Prejudice, Discrimination and Hate to promote student thinking in a more 'global' manner. Keywords: globalization, international psychology, society and culture, teaching interculturally
Procedia PDF Downloads 252430 The Effect of Applying the Electronic Supply System on the Performance of the Supply Chain in Health Organizations
Authors: Sameh S. Namnqani, Yaqoob Y. Abobakar, Ahmed M. Alsewehri, Khaled M. AlQethami
Abstract:
The main objective of this research is to determine the impact of applying an electronic supply system on the performance of the supply department of health organizations. To reach this goal, the study adopted independent variables to measure the dependent variable (performance of the supply department), namely: integration with suppliers, integration with intermediaries and distributors, and knowledge of supply size, inventory, and demand. The study used the descriptive method, aided by a questionnaire distributed to a sample of workers in the Supply Chain Management Department of King Abdullah Medical City. After the statistical analysis, the results showed that the 70 sample members strongly agree with the 'electronic integration with suppliers' axis, with a p-value of 0.001, especially with regard to the following: opening formal and informal communication channels between management and suppliers (Mean 4.59) and exchanging information with suppliers with transparency and clarity (Mean 4.50). The sample members also agree on the 'electronic integration with brokers and distributors' axis, with a p-value of 0.001, represented in the following elements: exchange of information between management, brokers, and distributors with transparency and clarity (Mean 4.18), and a close cooperative relationship between management, brokers, and distributors (Mean 4.13). The results also indicated that the respondents agreed to some extent on the 'knowledge of the size of supply, stock, and demand' axis, with a p-value of 0.001.
It also indicated that the respondents strongly agree that there is a relationship between electronic procurement and the performance of the procurement department in health organizations, with a p-value of 0.001, represented in the following: transparency and clarity in dealing with suppliers and intermediaries to prevent fraud and manipulation (Mean 4.50) and reduced costs of supplying the needs of the health organization (Mean 4.50). From these results, the study made several recommendations, the most important of which are: that health organizations increase the level of information sharing between themselves and their suppliers in order to implement electronic procurement in the supply management of health organizations; that attention be paid to electronic data interchange methods and to modern programs that enable supply management to exchange information with brokers and distributors and so determine the volume of supply, inventory, and demand; and that scientific supply methods be applied to storage. Information technology, for example electronic data and document interchange techniques, can help in contacting suppliers, brokers, and distributors and in determining the volume of supply, inventory, and demand, which contributes to improving the performance of the supply department in health organizations. Keywords: healthcare supply chain, performance, electronic system, ERP
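The reported item means and p-values suggest one-sample tests of each Likert item against the scale midpoint, although the abstract does not name the exact test. A minimal sketch with invented responses (not the study's data) of the statistic such a test rests on:

```python
import math
import statistics

def one_sample_t(scores, midpoint=3.0):
    """t statistic for testing whether the mean agreement on a 5-point
    Likert item differs from the scale midpoint (3 = neutral)."""
    n = len(scores)
    m = statistics.mean(scores)
    s = statistics.stdev(scores)          # sample standard deviation
    return (m - midpoint) / (s / math.sqrt(n))

# Hypothetical responses for one questionnaire item (illustration only):
responses = [5, 4, 5, 4, 4, 5, 4, 5]
t = one_sample_t(responses)
```

A large positive t (here the item mean is 4.5, well above 3) yields a small p-value, matching the pattern of "Mean 4.50, p = 0.001" reported for the axes above.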
Procedia PDF Downloads 136429 A Novel Harmonic Compensation Algorithm for High Speed Drives
Authors: Lakdar Sadi-Haddad
Abstract:
The study of very high-speed electrical drives has seen a resurgence of interest in the past few years, as an inventory of the scientific papers and patents dealing with the subject makes clear. In fact, the democratization of magnetic bearing technology is at the origin of recent developments in high-speed applications. These machines have as their main advantage a much higher power density than the state of the art. Nevertheless, particular attention should be paid to the design of the inverter as well as to control and command. The surface-mounted permanent magnet synchronous machine is the most appropriate technology to address high-speed issues. However, it has the drawback of using a carbon sleeve to contain the magnets, which could tear because of the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties, but it is a poor heat conductor, which results in very bad evacuation of the eddy current losses induced in the magnets by the time and space harmonics of the stator. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high-speed applications such harmonics are harmful because, on the one hand, the characteristic impedance is very low and, on the other hand, the ratio between the switching frequency and that of the fundamental is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while a second is to introduce a sinus filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in a very high-speed machine the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft mode frequency, subharmonics, etc.
Some studies address these issues but treat the phenomena with separate solutions (specific modulation strategies, active damping methods, etc.). The purpose of this paper is to present a complete new active harmonic compensation algorithm, based on an improvement of standard vector control, as a global solution to all these issues. The presentation will be based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. A state of the art of available solutions will then be provided before the content of the new active harmonic compensation algorithm is developed. The study is completed by a validation using simulations and a practical case on a high-speed machine. Keywords: active harmonic compensation, eddy current losses, high speed machine
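The inverter harmonics the abstract describes can be located directly: for sine-triangle PWM, the dominant voltage harmonics cluster in sidebands at m·f_sw ± n·f1 around multiples of the switching frequency. The sketch below, with purely hypothetical frequencies (not taken from the paper), enumerates candidate sideband locations and flags any that land near a system resonance such as the LC resonance of the sinus filter:

```python
def pwm_sidebands(f_sw, f1, m_max=2, n_max=3):
    """Candidate frequencies (Hz) of the dominant PWM voltage harmonics,
    which appear at m*f_sw +/- n*f1 (m = carrier multiple, n = baseband
    index).  The odd/even selection rule is omitted for brevity; all
    combinations are listed."""
    freqs = set()
    for m in range(1, m_max + 1):
        for n in range(0, n_max + 1):
            freqs.add(m * f_sw + n * f1)
            if m * f_sw - n * f1 > 0:
                freqs.add(m * f_sw - n * f1)
    return sorted(freqs)

def near_resonance(freqs, f_res, tol):
    """Flag harmonics within +/- tol Hz of a resonance frequency."""
    return [f for f in freqs if abs(f - f_res) <= tol]

# Hypothetical drive: 20 kHz switching, 1 kHz fundamental (low ratio,
# as in high-speed drives), filter resonance near 18.5 kHz:
sidebands = pwm_sidebands(20_000, 1_000)
risky = near_resonance(sidebands, 18_500, 600)
```

With the low switching-to-fundamental ratio typical of high-speed drives, the sidebands spread widely around f_sw, which is why they can collide with filter or shaft-mode resonances as the abstract warns.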
Procedia PDF Downloads 395428 Indigenous Companies in Nigeria's Oil Sector: Stages, Opportunities, and Obstacles regarding Corporate Social Responsibility
Authors: L. U. Dumuje, R. Leite
Abstract:
There is an ongoing debate about corporate social responsibility (CSR) initiatives in the Niger Delta, Nigeria, which originates from the gap between the stated objectives of organizations in the Nigerian oil sector and activities of theirs that threaten society. CSR in developing countries is becoming popular, yet research on CSR practices and discourse among indigenous Nigerian companies, to which this study contributes, is scarce. Despite government mandates against unofficial flaring, methane gas is released into the air around refinery areas, contributing to global warming; there is a need to understand whether this practice extends to indigenous oil companies in Nigeria. To gain a better understanding of CSR among indigenous oil companies in Nigeria, our study focuses on discourse and rhetoric regarding CSR. The contribution of this paper is twofold: on the one hand, it aims to better understand practitioners' rationales and the fundamentals of CSR in Nigerian oil companies; on the other hand, it intends to identify the stages of CSR initiatives and the advantages and difficulties of CSR implementation in the indigenous Nigerian oil sector. The paper uses a qualitative research strategy, with the semi-structured interview as the instrument for data collection. Besides 28 interviews, we conducted five focus group discussions with stakeholders. Participants in this study consist of employees, managers, and executives of indigenous oil companies in Nigeria. Key informants from government institutions and environmental organizations, as well as community leaders and members, are also part of our sample. Despite significant findings in some studies, gaps remain; to help fill them, we have formulated the following research question: 'What are the stages, opportunities, and obstacles of having corporate social responsibility practice in indigenous oil companies in Nigeria?'
The sub-questions of this ongoing research are as follows: What are the CSR discourses and practices among indigenous companies in the Nigerian oil sector? What is the actual status of CSR development? What are the main perceived opportunities and obstacles with regard to CSR in indigenous Nigerian oil companies? Who are the main stakeholders of indigenous Nigerian oil companies, and what are their different meanings and understandings of CSR practices? Regarding these questions, the following objectives have been determined: first, we conduct a literature review with the aim of understanding and identifying the importance of CSR practices in Western and developing countries; second, the paper identifies specific characteristics of the national context of CSR engagement in Nigeria, for which we perform empirical research with relevant stakeholders of indigenous Nigerian oil companies, as well as key informants, in order to trace the development of CSR and the different perceptions of this praised initiative. Keywords: corporate social responsibility, indigenous, oil organizations, Nigeria, practice
Procedia PDF Downloads 137427 Impact of Pandemics on Cities and Societies
Authors: Deepak Jugran
Abstract:
Purpose: To identify how past pandemics shaped social evolution and cities. Methodology: A historical and comparative analysis of major pandemics in human history: their origins, transmission routes, biological responses, and aftereffects. Using available secondary data, the analysis covers comprehensive pre- and post-pandemic scenarios and focuses selectively on the major issues and pandemics that have had the deepest and most lasting impact on society. Results: Past pandemics shaped the behavior of human societies and their cities and made them more resilient biologically, intellectually, and socially, endorsing Sir Charles Darwin's notion of 'survival of the fittest'. Conclusion: Pandemics have always resulted in great mortality, but they also improved individual human immunology and the collective social response; at the same time, they improved the public health systems of cities, health delivery systems, and water and sewage distribution systems, and institutionalized various welfare reforms. They made human beings more resilient biologically, intellectually, and socially, endorsing the 'AGIL' scheme of Prof. Talcott Parsons. Pandemics and infectious diseases are here to stay; as a society, we need to strengthen our collective response and preparedness, besides evolving mechanisms for strict controls on the inter-continental movement of people and especially of animals, which act as carriers for these novel viruses. Pandemics over the years have acted like natural storms, mitigating prevailing social imbalances and laying the foundation for scientific discoveries.
We expect that post-Covid-19, institutionalized city, state, and national mechanisms will be strengthened, and that recommendations issued by various expert groups, ignored earlier, will now be implemented for reliable anticipation and better preparedness, helping to minimize the impact of pandemics. Our analysis does not intend to present a chronological account of pandemics; rather, it focuses selectively on major pandemics in history, their causes, how they wiped out entire city populations, and how they influenced societies and their behavior and facilitated social evolution. Keywords: pandemics, Covid-19, social evolution, cities
Procedia PDF Downloads 112426 Ethnic-Racial Breakdown in Psychological Research among Latinx Populations in the U.S.
Authors: Madeline Phillips, Luis Mendez
Abstract:
The 21st century has seen an increase in the amount and variety of psychological research on Latinx populations, the largest minority group in the U.S., with great variability from an individual's cultural origin (e.g., ethnicity) to region (e.g., nationality). We were interested in exploring how scientists recruit, conduct, and report research on Latinx samples. Ethnicity and race are important components of individuals and should be addressed to capture a broader and deeper understanding of psychological research findings. To explore Latinx/Hispanic work, the Journal of Latinx Psychology (JLP) and the Hispanic Journal of Behavioral Sciences (HJBS) were analyzed for (1) measures of ethnicity and race in empirical studies, (2) nationalities represented, and (3) how researchers reported ethnic-racial demographics. The analysis included publications from 2013-2018 and revealed two common problems in the reporting of ethnicity and race: overrepresentation/underrepresentation and overgeneralization. There is currently no systematic way of reporting ethnicity and race in Latinx/Hispanic research, which creates a vague sense of what role ethnicity/race plays in the lives of participants, and studies use the Hispanic/Latinx terms interchangeably and inconsistently across publications. Because we were only interested in publications with Latinx samples in the U.S., studies conducted outside the U.S. and non-empirical studies were excluded: JLP went from N = 118 articles to N = 94, and HJBS from N = 174 to N = 154. For this project, we developed a coding rubric for ethnicity/race that reflected the different ways researchers reported ethnicity and race and was compatible with the U.S. census. We coded which ethnicity/race was identified as the largest ethnic group in each sample, using the ethnic-racial breakdown numbers or percentages where provided.
There were also studies that simply did not report any ethnic composition beyond Hispanic or Latinx. We found that in 80% of the samples, Mexicans were overrepresented relative to the population statistics of Latinx people in the U.S. We examined all the ethnic-racial breakdowns, demonstrating the overrepresentation of Mexican samples and the underrepresentation, or complete lack of representation, of certain ethnicities (e.g., Chilean, Guatemalan). Our results also showed an overgeneralization in studies that cluster their participants as Latinx/Hispanic: 23 for JLP and 63 for HJBS. The authors discuss the importance of transparency from researchers in reporting the context of the sample, including country, state, neighborhood, and demographic variables relevant to the goals of the project, except where issues of privacy and/or confidentiality are involved. In addition, the authors discuss the importance of recognizing the variability within the Latinx population and how it is reflected in scientific discourse. Keywords: Latinx, Hispanic, race and ethnicity, diversity
425 Men's Intimate Violence: Theory and Practice Relationship
Authors: Omer Zvi Shaked
Abstract:
Intimate Partner Violence (IPV) is a widespread social problem. Since the 1970s, and owing to political changes driven by the feminist movement, western society has been changing its attitude toward the phenomenon and taking an active approach to reducing its magnitude. Initiatives in the form of legislation, awareness and prevention campaigns, women's shelters, and community intervention programs have become more prevalent over the years. Although many initiatives were found to be productive, the effectiveness of one has remained questionable throughout the years: intervention programs for men's intimate violence. Surveys outline two main intervention models for men's intimate violence. The first is the Duluth model, which argues that men are socialized to be dominant, while women are socialized to be subordinate, and that men are therefore required by social imperative to enforce their dominance, physically if necessary. The Duluth model became the chief authorized intervention program, and some U.S. states even regulated it as the standard criminal justice program for men's intimate violence. However, meta-analytic findings demonstrated that, based on partners' reports, Duluth treatment completers have a 44% recidivism rate, with dropout rates ranging from 40% to 85%. The second model is the Cognitive-Behavioral Model (CBT), a widely accepted intervention worldwide. This model argues that cognitive misrepresentations of intimate situations frequently precede violent behaviors when a predisposition to anger exists. Since anger dysregulation mediates between one's cognitive schemas and a violent response, anger regulation became the chief purpose of the intervention. Yet a meta-analysis found only a 56% risk reduction for CBT interventions. It is therefore crucial to understand the background behind the dominance of both the Duluth model and CBT interventions.
This presentation will discuss the ways in which theoretical conceptualizations of men's intimate violence, as well as ideologies, have contributed to the wide acceptance of the above-mentioned interventions, despite a known lack of scientific and evidential support. First, the presentation will review the prominent interventions for male intimate violence: the Duluth model and CBT. Second, it will review the prominent theoretical models explaining men's intimate violence: the Patriarchal model, the Abusive Personality model, and the Post-Traumatic Stress model. Third, it will discuss the interrelation between theory and practice, and the nature of the affinity between research and practice regarding men's intimate violence. Finally, the presentation will set new directions for further research, aiming to improve the efficiency of interventions for men's intimate violence and to advance social work practice in the field. Keywords: intimate partner violence, theory and practice relationship, Duluth, CBT, abusive personality, post-traumatic stress
424 An Econometric Analysis of the Flat Tax Revolution
Authors: Wayne Tarrant, Ethan Petersen
Abstract:
The concept of a flat tax goes back at least to the Biblical tithe. A progressive income tax was first vociferously espoused in a small, but famous, pamphlet in 1848 (although England had instituted an emergency progressive tax for war costs prior to this). Within a few years, many countries had adopted the progressive structure. The flat tax was reinstated only in some small countries and British protectorates until Mart Laar was elected Prime Minister of Estonia in 1992. Since Estonia's adoption of the flat tax in 1993, many other formerly Communist countries have likewise abandoned progressive income taxes. Economists had expectations of what would happen when a flat tax was enacted, but very little work has been done on actually measuring the effect. With a testbed of 21 countries in this region that currently have a flat tax, much comparison is possible. Several countries have retained progressive taxes, giving an opportunity for contrast. There are also the cases of the Czech Republic and Slovakia, which adopted and later abandoned the flat tax. Further, with over 20 years' worth of economic history in some flat tax countries, we can begin serious longitudinal study. In this paper, we consider many economic variables to determine whether there are statistically significant differences from before to after the adoption of a flat tax. We consider unemployment rates, tax receipts, GDP growth, Gini coefficients, and market data where available. Comparisons are made through the use of event studies and time series methods. The results are mixed, but we draw statistically significant conclusions about some effects. We also look at the different implementations of the flat tax. In some countries, the income and corporate tax rates are equal. In others, the income tax has the lower rate, while in still others the reverse is true. Each of these choices sends a clear message to individuals and corporations, and the policy makers surely have a desired effect in mind.
We group countries with similar policies, try to determine whether the intended effect actually occurred, and then report the results. This is a work in progress, and we welcome suggestions of variables to consider. Further, some of the data from before the fall of the Iron Curtain are suspect: since new regimes now rule these countries, the methods of computing different statistical measures have changed. Although we first look at the raw data as reported, we also attempt to account for these changes. We show which data appear to be fictional and suggest ways to infer the needed statistics from other data. These results are reported alongside those based on the data as reported. Since there is debate about taxation structure, this paper can help inform policymakers of the changes the flat tax has caused in other countries. The work shows some strengths and weaknesses of a flat tax structure. Moreover, it provides the beginnings of a scientific analysis of the flat tax in practice, rather than a discussion based solely upon theory and conjecture. Keywords: flat tax, financial markets, GDP, unemployment rate, Gini coefficient
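The before/after comparison this abstract describes can be sketched as a minimal two-sample test on pre- and post-adoption windows. The growth figures below are hypothetical placeholders for illustration only, not data from the study:

```python
from statistics import mean
from scipy import stats

# Hypothetical annual GDP growth rates (%) for one country,
# in the years before and after flat-tax adoption (illustrative numbers).
pre_adoption = [1.0, 1.4, 0.8, 1.1, 0.9]
post_adoption = [3.1, 2.8, 3.4, 2.9, 3.2]

# Welch's t-test: is the difference in mean growth statistically significant?
t_stat, p_value = stats.ttest_ind(pre_adoption, post_adoption, equal_var=False)

print(f"mean before: {mean(pre_adoption):.2f}%, after: {mean(post_adoption):.2f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A full event study would additionally control for region-wide trends (e.g., by differencing against a comparison country that kept progressive taxation), but the core question, whether the post-adoption mean differs significantly from the pre-adoption mean, takes this form.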
423 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass
Authors: Ricardo Torcato, Helder Morais
Abstract:
The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained repeatably, and the finishing operations require intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. We intend to investigate the applicability to crystal processing of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance. Research in the field of grinding hard and brittle materials, while not extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. It can nevertheless be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work will measure the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation focuses on the mechanical characterization and the analysis of cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear around the indentation. The mechanical impulse excitation test estimates the Young's modulus, shear modulus, and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process.
The tests were designed following the Taguchi method to correlate the input parameters (feed rate, tool rotation speed, and depth of cut) with the output parameters (surface roughness and cutting forces), using ANOVA to optimize the process: the best roughness achievable with cutting forces that do not compromise the material structure or the tool life. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding. Keywords: CNC machining, crystal glass, cutting forces, hardness
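The significance testing behind a Taguchi/ANOVA design of this kind can be sketched with a one-way ANOVA on one factor. The roughness values and factor levels below are invented for illustration; they are not measurements from this work:

```python
from scipy import stats

# Hypothetical surface-roughness readings (Ra, in micrometers) grouped by
# feed-rate level, three replicates per level (illustrative values only).
roughness_by_feed = {
    "low":    [0.41, 0.44, 0.39],
    "medium": [0.62, 0.58, 0.65],
    "high":   [0.91, 0.88, 0.95],
}

# One-way ANOVA: does feed rate have a statistically significant effect
# on surface roughness, relative to within-level scatter?
f_stat, p_value = stats.f_oneway(*roughness_by_feed.values())

print(f"F = {f_stat:.1f}, p = {p_value:.2e}, significant: {p_value < 0.05}")
```

A full Taguchi analysis would run this for each factor (feed rate, rotation speed, depth of cut) over an orthogonal array of runs and pick the level combination that minimizes the responses; the per-factor F-test shown here is the building block.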
422 Investigation of the Effects of 10-Week Nordic Hamstring Exercise Training and Subsequent Detraining on Plasma Viscosity and Oxidative Stress Levels in Healthy Young Men
Authors: H. C. Ozdamar , O. Kilic-Erkek, H. E. Akkaya, E. Kilic-Toprak, M. Bor-Kucukatay
Abstract:
The Nordic hamstring exercise (NHE) is used to increase hamstring muscle strength and prevent injuries. The aim of this study was to reveal the acute and long-term effects of 10 weeks of NHE, followed by 5 and 10 weeks of detraining, on anthropometric measurements, flexibility, anaerobic power, muscle architecture, muscle damage, fatigue, oxidative stress, plasma viscosity (PV), and blood lactate levels. Forty sedentary, healthy male volunteers underwent 10 weeks of progressive NHE followed by 5 and 10 weeks of detraining. Muscle architecture was determined by ultrasonography and stiffness by strain elastography. Anaerobic power was assessed by double-foot standing long jump and vertical jump tests, and flexibility by sit-lie and hamstring flexibility tests. Creatine kinase activity and oxidant/antioxidant parameters were measured from venous blood using commercial kits, whereas PV was determined using a cone-plate viscometer. Blood lactate was measured from the fingertip. NHE allowed subjects to lose weight, an effect that was reversed by 5 weeks of detraining. Exercise caused an increase in knee angles measured by goniometer, which was not affected by detraining. The 10-week NHE program caused an increase in anaerobic performance that was partially reversed upon detraining. NHE increased the biceps femoris long head (BFub) area and pennation angle, gains that were reversed by 10 weeks of detraining. Blood lactate levels, muscle pain, and fatigue increased after each exercise session. NHE did not change oxidant/antioxidant parameters; 5 weeks of detraining resulted in an increase in total oxidant capacity (TOC) and the oxidative stress index (OSI), whereas 10 weeks of detraining caused a reduction in these parameters. Acute exercise caused a reduction in PV from week 1 through week 10, and the pre-exercise PV measured in the 10th week was lower than the basal value. Detraining caused an increase in PV. These results may guide the selection of exercise type to increase performance and muscle strength.
Knowing how much of the gains will be lost after a period of detraining can help raise awareness of the importance of exercise continuity. This work was supported by the PAU Scientific Research Projects Coordination Unit (Project number: 2018SABE034). Keywords: anaerobic power, detraining, Nordic hamstring exercise, oxidative stress, plasma viscosity
421 Economic Impact of Drought on Agricultural Society: Evidence Based on a Village Study in Maharashtra, India
Authors: Harshan Tee Pee
Abstract:
Climate elements include surface temperature, rainfall patterns, humidity, type and amount of cloudiness, air pressure, and wind speed and direction. A change in one element can have an impact on the regional climate. Scientific predictions indicate that global climate change will increase the number of extreme events, leading to more frequent natural hazards. Global warming is likely to intensify the risk of drought in certain parts of the world while leading to increased rainfall in others. Drought is a slowly advancing disaster, a creeping phenomenon whose effects accumulate over a long period of time. Droughts are naturally linked with aridity, but they occur over most parts of the world (both wet and humid regions) and create severe impacts on agriculture, basic household welfare, and ecosystems. Drought conditions occur at least every three years in India, which is among the most vulnerable drought-prone countries in the world. The economic impacts resulting from extreme environmental events and disasters are huge, as a result of the disruption of many economic activities. The focus of this paper is to develop a comprehensive understanding of the distributional impacts of disaster, especially the impact of drought on agricultural production and income, through a panel study (a drought year and the year after the drought) in Raikhel village, Maharashtra, India. The major findings of the study indicate that both the cultivated area and the number of cultivating households fell after the drought, indicating a shift in livelihood: households moved from agriculture to non-agriculture. The decline in the gross cropped area and in the production of various crops depended on the negative income from these crops in the previous agricultural season. All landholding categories of households except landlords had negative income in the drought year, and income disparities between households were higher in that year.
In the drought year, the cost of cultivation was higher for all landholding categories due to increased irrigation and input costs. Also in the drought year, agricultural products (50 per cent of the total) were used for household consumption rather than sold in the market. It is evident from the study that livelihoods based on natural resources became less attractive due to the risk involved, and people were moving to lower-risk livelihoods for their sustenance. Keywords: climate change, drought, agriculture economics, disaster impact
420 Formulation and Test of a Model to explain the Complexity of Road Accident Events in South Africa
Authors: Dimakatso Machetele, Kowiyou Yessoufou
Abstract:
Whilst several studies have indicated that road accident events may be more complex than previously thought, we have a limited scientific understanding of this complexity in South Africa. The present project proposes and tests a more comprehensive metamodel that integrates multiple causality relationships among variables previously linked to road accidents. This was done by fitting a structural equation model (SEM) to data collected from various sources. The study also fitted a GARCH (Generalized Autoregressive Conditional Heteroskedasticity) model to predict the future of road accidents in the country. The analysis shows that the number of road accidents has been increasing since 1935. The road fatality rate follows a polynomial trend given by y = -0.0114x² + 1.2378x - 2.2627 (R² = 0.76), where y is the death rate and x the year. This trend results in an average death rate of 23.14 deaths per 100,000 people. Furthermore, the analysis shows that the number of crashes is significantly explained by the total number of vehicles (P < 0.001), the number of registered vehicles (P < 0.001), the number of unregistered vehicles (P = 0.003), and the population of the country (P < 0.001). Contrary to expectation, the number of driver licenses issued and the total distance traveled by vehicles do not correlate significantly with the number of crashes (P > 0.05). Furthermore, the analysis reveals that the number of casualties is linked significantly to the number of registered vehicles (P < 0.001) and the total distance traveled by vehicles (P = 0.03). As for fatal crashes, the total number of vehicles (P < 0.001), the numbers of registered (P < 0.001) and unregistered (P < 0.001) vehicles, the population of the country (P < 0.001), and the total distance traveled by vehicles (P < 0.001) all correlate significantly with the number of fatal crashes.
However, the number of casualties and, again, the number of driver licenses do not seem to determine the number of fatal crashes (P > 0.05). Finally, the number of crashes is predicted to be roughly constant over time at 617,253 accidents for the next 10 years, with the worst-case scenario suggesting that this number may reach 1,896,667. The number of casualties is also predicted to be roughly constant at 93,531 over time, although it may reach 661,531 in the worst-case scenario. Although the number of fatal crashes may decrease over time, it is forecast to reach 11,241 within the next 10 years, with the worst-case estimate at 19,034 for the same period. The number of fatalities is likewise predicted to be roughly constant at 14,739 but may reach 172,784 in the worst-case scenario. Overall, the present study reveals the complexity of road accidents and allows us to propose several recommendations aimed at reducing the trends in road accidents, casualties, fatal crashes, and deaths in South Africa. Keywords: road accidents, South Africa, statistical modelling, trends
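The reported fatality-rate polynomial can be evaluated directly; the sketch below uses only the coefficients quoted in the abstract, and the peak-location calculation is our own illustration (not a claim from the study):

```python
def death_rate(x: float) -> float:
    """Fatality rate (deaths per 100,000 people) as a function of the year
    index x, using the polynomial fit quoted in the abstract (R^2 = 0.76)."""
    return -0.0114 * x**2 + 1.2378 * x - 2.2627

# The parabola opens downward, so the rate peaks where the derivative
# -0.0228x + 1.2378 equals zero.
peak_x = 1.2378 / (2 * 0.0114)

print(f"rate at x = 50: {death_rate(50):.2f}")
print(f"peak at x = {peak_x:.1f}, peak rate = {death_rate(peak_x):.2f}")
```

The negative quadratic coefficient implies the fitted curve eventually turns downward, which is consistent with the abstract's forecast that fatal crashes may decrease over time even as absolute counts remain high.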
419 Reflective Thinking and Experiential Learning – A Quasi-Experimental Quanti-Quali Response to Greater Diversification of Activities, Greater Integration of Student Profiles
Authors: Paulo Sérgio Ribeiro de Araújo Bogas
Abstract:
Although several studies have assumed, at least implicitly, that learners' approaches to learning develop into deeper approaches during higher education, there appears to be no clear theoretical basis for this assumption and no empirical evidence. As a scientific contribution to this discussion, a pedagogical intervention of a quasi-experimental nature was developed with a mixed methodology, evaluating the intervention within a single curricular unit of Marketing using cases based on real brand challenges, business simulation, and customer projects. Primary and secondary experiences were incorporated into the intervention: the primary experiences are the experiential activities themselves; the secondary experiences result from the primary ones, such as reflection and discussion in work teams. A diversified learning relationship was encouraged through the various connections between the different members of the learning community. The present study concludes that, within the same context, students' responses can be described as follows: students who reinforce their initial deep approach, students who maintain their initial deep-approach level, and others who shift from an emphasis on the deep approach to one closer to the superficial. This typology did not always confirm studies reported in the literature, namely regarding whether the initial level of deep processing influences the superficial approach and vice versa. The results of this investigation point to the inclusion of pedagogical and didactic activities that integrate different motivations and initial strategies, leading to the possible adoption of deep approaches to learning, since the intervention revealed statistically significant differences in deep/superficial-approach scores and in experiential level. In the case of the real challenges, the categories of "attribution of meaning to what is studied" and the possibility of "contact with an aspirational context" for the students' future profession stand out.
Within this category, the dimensions of autonomy that will be required of the students were also revealed when comparing the classroom context of real cases with the future professional context and the impact they may have on the world. Regarding the simulated practice, two categories of response stand out: on the one hand, the motivation associated with the possibility of measuring the results of the decisions taken and an awareness of oneself; on the other hand, the additional effort that this practice required of some of the students. Keywords: experiential learning, higher education, mixed methods, reflective learning, marketing