Search results for: 5000 series and 6000 series Al alloys
402 Numerical Simulation on Deformation Behaviour of Additively Manufactured AlSi10Mg Alloy
Authors: Racholsan Raj Nirmal, B. S. V. Patnaik, R. Jayaganthan
Abstract:
The deformation behaviour of additively manufactured AlSi10Mg alloy under low strains, high strain rates and elevated temperature conditions is essential to analyse and predict its response to dynamic loading such as impact and thermomechanical fatigue. The Johnson-Cook constitutive relation is used to capture the strain rate sensitivity and thermal softening effect in AlSi10Mg alloy. The Johnson-Cook failure model is widely used for exploring damage mechanics and predicting fracture in many materials. In the present work, Johnson-Cook material and damage model parameters for additively manufactured AlSi10Mg alloy have been determined numerically from four types of uniaxial tensile test. Uniaxial tensile tests at dynamic strain rates of 0.1, 1, 10, 50, and 100 s⁻¹, together with elevated-temperature tensile tests at three temperatures (450 K, 500 K and 550 K), were performed on the 3D-printed AlSi10Mg alloy in ABAQUS/Explicit. Hexahedral elements were used to discretize the tensile specimens, and a fracture energy value of 43.6 kN/m was used for damage initiation. The Levenberg-Marquardt optimization method was used to evaluate the Johnson-Cook model parameters. It was observed that the additively manufactured AlSi10Mg alloy shows relatively higher strain rate sensitivity and lower thermal stability compared to other Al alloys.
Keywords: ABAQUS, additive manufacturing, AlSi10Mg, Johnson-Cook model
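A minimal sketch of the kind of fit described above: Johnson-Cook flow-stress parameters (A, B, n, C, m) estimated from stress data with a Levenberg-Marquardt least-squares solver. The reference strain rate, melting/reference temperatures and the data points below are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import least_squares

def jc_stress(params, strain, strain_rate, T, eps0=1.0, T_ref=293.0, T_melt=870.0):
    """Johnson-Cook flow stress: (A + B*eps^n)(1 + C*ln(rate/eps0))(1 - T*^m)."""
    A, B, n, C, m = params
    T_star = np.clip((T - T_ref) / (T_melt - T_ref), 1e-6, 1.0)
    return (A + B * strain**n) * (1.0 + C * np.log(strain_rate / eps0)) * (1.0 - T_star**m)

def residuals(params, strain, strain_rate, T, sigma_obs):
    return jc_stress(params, strain, strain_rate, T) - sigma_obs

# Illustrative data points (strain, strain rate in 1/s, temperature in K, stress in MPa).
strain      = np.array([0.02, 0.05, 0.08, 0.02, 0.05, 0.08, 0.05])
strain_rate = np.array([0.1,  0.1,  0.1,  10.0, 10.0, 100.0, 1.0])
temperature = np.array([293., 293., 293., 293., 450., 293., 550.])
sigma_obs   = np.array([270., 310., 335., 300., 240., 360., 180.])

x0 = np.array([250.0, 300.0, 0.4, 0.01, 1.0])   # initial guess for A, B, n, C, m
fit = least_squares(residuals, x0, method="lm",
                    args=(strain, strain_rate, temperature, sigma_obs))
A, B, n, C, m = fit.x
print(f"A={A:.1f} MPa, B={B:.1f} MPa, n={n:.3f}, C={C:.4f}, m={m:.3f}")
```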
Procedia PDF Downloads 170
401 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies
Authors: Pragati Sirohi, Vivek Singh Rana
Abstract:
A brand is a name, term, sign, symbol, design, or a combination of these, intended to identify the goods or services of one seller or group of sellers and to differentiate them from those of competitors. Perception influences purchase decisions, so building that perception is critical. The FMCG industry is a low-margin business; volumes hold the key to success, and therefore the industry places a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing brands. Brand loyalty is fickle; companies know this, which is why they relentlessly work towards brand building. The purpose of the study is to compare Indian and multinational companies in the FMCG sector in India. It has been hypothesized that, after liberalization, Indian companies have taken up the challenge of globalization and some of them are giving stiff competition to MNCs. It is further hypothesized that MNCs have a stronger brand image than Indian companies, and that advertisement expenditures of MNCs are proportionately higher than those of their Indian counterparts. The operational area of the study is the country as a whole. Continuous time series data are available from 1996 to 2014 for the eight selected companies, chosen on the basis of their large market share, brand equity and prominence in the market. The research methodology focuses on estimating trend growth rates of market capitalization, net worth and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). Brand value is estimated as the excess of a company's market capitalization over its net worth, and brand value indices are calculated. The correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. Major results indicate that although MNCs enjoy a stronger brand image, a few Indian companies compete strongly; ITC is the outstanding leader in terms of market capitalization and brand value, and Dabur and Tata Global Beverages Ltd are competing equally well on these values. Advertisement expenditures are highest for HUL, followed by ITC, Colgate and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditure plays a role in the brand-building process, many other factors also affect it. Also, brand values of FMCG companies in India are decreasing over the years, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they have to put consistent, proactive and relentless effort into their brand-building process. Brands need focus and consistency. Brand longevity without innovation leads to brand respect but does not create brand value.
Keywords: brand value, FMCG, market capitalization, net worth
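A small illustration, with made-up numbers, of the two computations described above: brand value taken as the excess of market capitalization over net worth, the trend growth rate obtained from a log-linear regression of that series, and its correlation with advertising expenditure. None of the figures below come from the Prowess data.

```python
import numpy as np

years      = np.arange(1996, 2004)                       # illustrative sub-period only
market_cap = np.array([120., 138., 150., 171., 190., 205., 226., 248.])  # e.g. billion INR
net_worth  = np.array([ 40.,  44.,  47.,  52.,  55.,  60.,  64.,  70.])
ad_spend   = np.array([ 3.2,  3.6,  3.9,  4.5,  4.8,  5.3,  5.9,  6.4])

brand_value = market_cap - net_worth                     # excess of market cap over net worth

# Trend growth rate: slope of ln(brand value) against time, expressed as % per year.
slope, _ = np.polyfit(years, np.log(brand_value), 1)
trend_growth = (np.exp(slope) - 1) * 100

# Association between branding and advertising.
corr = np.corrcoef(brand_value, ad_spend)[0, 1]
print(f"trend growth ≈ {trend_growth:.1f}% per year, corr(brand value, ad spend) = {corr:.2f}")
```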
Procedia PDF Downloads 358
400 Solutions to Reduce CO2 Emissions in Autonomous Robotics
Authors: Antoni Grau, Yolanda Bolea, Alberto Sanfeliu
Abstract:
Mobile robots can be used in many different applications, including mapping, search and rescue, reconnaissance, hazard detection, carpet cleaning, exploration, etc. However, they are limited by their reliance on traditional energy sources such as electricity and oil, which cannot always provide a convenient energy source in all situations. In an ever more eco-conscious world, solar energy offers the most environmentally clean option of all energy sources. Electricity generation can be polluting, and oil poses a huge threat to the environment: not only does it cause harm through the toxic emissions (for instance CO2) released by the combustion needed to produce energy, but there is also the ever-present risk of oil spillages and damage to ecosystems. Solar energy can help to mitigate carbon emissions by replacing more carbon-intensive sources of heat and power. The challenge of this work is to propose the design and implementation of electric battery recharge stations. These recharge docks are based on renewable energy, namely solar energy harvested with photovoltaic panels, with the objective of reducing CO2 emissions. In this paper, a comparative study of the CO2 emissions produced when charging Segway PT batteries from different energy sources (natural gas, gas oil, fuel and solar panels) is carried out. For the solar-energy case, a photovoltaic panel and a buck-boost DC/DC block were used. Specifically, the STP005S-12/Db solar panel was used in our experiments; this 5 Wp photovoltaic (PV) module consists of 36 monocrystalline cells connected in series. With these elements, a battery recharge station was built to recharge the robot batteries. For the energy-storage DC/DC block, a series of ultracapacitors was used. Because the PV panel output varies with temperature and irradiation, and because of the non-integer behavior of the ultracapacitors and the non-linearities of the whole system, the authors used a fractional control method so that the solar panels supply the maximum allowed power and recharge the robots in the least time. Greenhouse gas emissions from electricity production vary due to regional differences in source fuel. The impact of an energy technology on the climate can be characterised by its carbon emission intensity, a measure of the amount of CO2, or CO2 equivalent, emitted per unit of energy generated. In our work, coal is the most hazardous fossil energy source, producing 53% more gas emissions than natural gas and 30% more than fuel. Moreover, it is remarkable that existing fossil fuel technologies produce high carbon emission intensity through the combustion of carbon-rich fuels, whilst renewable technologies such as solar produce little or no emissions during operation, but may incur emissions during manufacture. Solar energy can thus help to mitigate carbon emissions.
Keywords: autonomous robots, CO2 emissions, DC/DC buck-boost, solar energy
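A back-of-the-envelope sketch of the kind of comparison reported above: the CO2 attributable to one battery charge under a given generation source is the electrical energy drawn multiplied by that source's emission factor. The battery energy, charger efficiency and emission factors below are generic illustrative values, not the figures measured in the study.

```python
# Illustrative emission factors in kg CO2 per kWh of electricity delivered
# (typical literature ranges; not the values used in the paper).
EMISSION_FACTOR = {
    "coal":        0.95,
    "fuel_oil":    0.70,
    "natural_gas": 0.45,
    "solar_pv":    0.05,   # life-cycle (manufacturing) emissions only
}

def charge_emissions(energy_kwh, source, charger_efficiency=0.85):
    """kg of CO2 emitted to deliver `energy_kwh` into the battery from `source`."""
    grid_energy = energy_kwh / charger_efficiency
    return grid_energy * EMISSION_FACTOR[source]

battery_kwh = 0.4   # assumed energy per Segway PT recharge (placeholder)
for src in EMISSION_FACTOR:
    print(f"{src:12s}: {charge_emissions(battery_kwh, src) * 1000:.0f} g CO2 per charge")
```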
Procedia PDF Downloads 422
399 Damage-Based Seismic Design and Evaluation of Reinforced Concrete Bridges
Authors: Ping-Hsiung Wang, Kuo-Chun Chang
Abstract:
There has been a common trend worldwide in the seismic design and evaluation of bridges towards the performance-based method where the lateral displacement or the displacement ductility of bridge column is regarded as an important indicator for performance assessment. However, the seismic response of a bridge to an earthquake is a combined result of cyclic displacements and accumulated energy dissipation, causing damage to the bridge, and hence the lateral displacement (ductility) alone is insufficient to tell its actual seismic performance. This study aims to propose a damage-based seismic design and evaluation method for reinforced concrete bridges on the basis of the newly developed capacity-based inelastic displacement spectra. The capacity-based inelastic displacement spectra that comprise an inelastic displacement ratio spectrum and a corresponding damage state spectrum was constructed by using a series of nonlinear time history analyses and a versatile, smooth hysteresis model. The smooth model could take into account the effects of various design parameters of RC bridge columns and correlates the column’s strength deterioration with the Park and Ang’s damage index. It was proved that the damage index not only can be used to accurately predict the onset of strength deterioration, but also can be a good indicator for assessing the actual visible damage condition of column regardless of its loading history (i.e., similar damage index corresponds to similar actual damage condition for the same designed columns subjected to very different cyclic loading protocols as well as earthquake loading), providing a better insight into the seismic performance of bridges. Besides, the computed spectra show that the inelastic displacement ratio for far-field ground motions approximately conforms to the equal displacement rule when structural period is larger than around 0.8 s, but that for near-fault ground motions departs from the rule in the whole considered spectral regions. Furthermore, the near-fault ground motions would lead to significantly greater inelastic displacement ratio and damage index than far-field ground motions and most of the practical design scenarios cannot survive the considered near-fault ground motion when the strength reduction factor of bridge is not less than 5.0. Finally, the spectrum formula is presented as a function of structural period, strength reduction factor, and various column design parameters for far-field and near-fault ground motions by means of the regression analysis of the computed spectra. And based on the developed spectrum formula, a design example of a bridge is presented to illustrate the proposed damage-based seismic design and evaluation method where the damage state of the bridge is used as the performance objective.Keywords: damage index, far-field, near-fault, reinforced concrete bridge, seismic design and evaluation
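The Park and Ang damage index referred to above is conventionally written as DI = δm/δu + β·Eh/(Fy·δu), combining the peak displacement demand with the cumulative hysteretic energy. A minimal sketch under that standard formulation follows; the numerical inputs are placeholders, not values from the study.

```python
def park_ang_damage_index(delta_max, delta_ult, hysteretic_energy, yield_force, beta=0.05):
    """Park-Ang index: DI = delta_max/delta_ult + beta * E_h / (F_y * delta_ult).

    delta_max         peak displacement demand
    delta_ult         ultimate displacement capacity under monotonic loading
    hysteretic_energy cumulative dissipated energy E_h
    yield_force       yield strength F_y
    beta              strength-deterioration parameter (structure dependent)
    """
    return delta_max / delta_ult + beta * hysteretic_energy / (yield_force * delta_ult)

# Placeholder values for a single RC column (units must be consistent, e.g. kN and m).
di = park_ang_damage_index(delta_max=0.08, delta_ult=0.12,
                           hysteretic_energy=150.0, yield_force=400.0)
print(f"Damage index = {di:.2f}")   # DI >= 1.0 is conventionally taken as collapse
```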
Procedia PDF Downloads 125
398 Sociology Perspective on Emotional Maltreatment: Retrospective Case Study in a Japanese Elementary School
Authors: Nozomi Fujisaka
Abstract:
This sociological case study analyzes a sequence of student maltreatment in an elementary school in Japan, based on narratives from former students. Among various forms of student maltreatment, emotional maltreatment has received less attention. One reason for this is that emotional maltreatment is often considered part of education and is difficult to capture in surveys. To discuss the challenge of recognizing emotional maltreatment, it's necessary to consider the social background in which student maltreatment occurs. Therefore, from the perspective of the sociology of education, this study aims to clarify the process through which emotional maltreatment was embraced by students within a Japanese classroom. The focus of this study is a series of educational interactions by a homeroom teacher with 11- or 12-year-old students at a small public elementary school approximately 10 years ago. The research employs retrospective narrative data collected through interviews and autoethnography. The semi-structured interviews, lasting one to three hours each, were conducted with 11 young people who were enrolled in the same class as the researcher during their time in elementary school. Autoethnography, as a critical research method, contributes to existing theories and studies by providing a critical representation of the researcher's own experiences. Autoethnography enables researchers to collect detailed data that is often difficult to verbalize in interviews. These research methods are well-suited for this study, which aims to shift the focus from teachers' educational intentions to students' perspectives and gain a deeper understanding of student maltreatment. The research results imply a pattern of emotional maltreatment that is challenging to differentiate from education. In this study's case, the teacher displayed calm and kind behavior toward students after a threat and an explosion of anger. Former students frequently mentioned this behavior of the teacher and perceived emotional maltreatment as part of education. It was not uncommon for former students to offer positive evaluations of the teacher despite experiencing emotional distress. These findings are analyzed and discussed in conjunction with the deschooling theory and the cycle of violence theory. The deschooling theory provides a sociological explanation for how emotional maltreatment can be overlooked in society. The cycle of violence theory, originally developed within the context of domestic violence, explains how violence between romantic partners can be tolerated due to prevailing social norms. Analyzing the case in association with these two theories highlights the characteristics of teachers' behaviors that rationalize maltreatment as education and hinder students from escaping emotional maltreatment. This study deepens our understanding of the causes of student maltreatment and provides a new perspective for future qualitative and quantitative research. Furthermore, since this research is based on the sociology of education, it has the potential to expand research in the fields of pedagogy and sociology, in addition to psychology and social welfare.Keywords: emotional maltreatment, education, student maltreatment, Japan
Procedia PDF Downloads 85
397 Study on the Prediction of Serviceability of Garments Based on the Seam Efficiency and Selection of the Right Seam to Ensure Better Serviceability of Garments
Authors: Md Azizul Islam
Abstract:
A seam is the line along which two separate fabric layers are joined for functional or aesthetic purposes. Different kinds of seams are used to assemble the different areas or parts of a garment and to increase serviceability. To empirically support the importance of seam efficiency for garment serviceability, this study focuses on choosing the right type of seam for particular sewn parts of the garment, based on seam efficiency, to ensure better serviceability. Seam efficiency is the ratio of seam strength to fabric strength. Single jersey knitted finished fabrics of four different GSMs (grams per square meter) were used to make the test garments (T-shirts). Three distinct seam types (superimposed, lapped and flat) were applied to the side seams of the T-shirts and sewn with a lockstitch (stitch class 301) on a flat-bed plain sewing machine (maximum sewing speed: 5000 rpm) to make (3x4) 12 T-shirts. For experimental purposes, the needle thread count (50/3 Ne), bobbin thread count (50/2 Ne), stitch density (8-9 stitches per inch), needle size (16 in the Singer system), stitch length (31 cm) and seam allowance (2.5 cm) were kept the same for all specimens. The grab test (ASTM D5034-08) was performed on a universal tensile tester to measure the seam strength and fabric strength. The produced T-shirts were given to 12 soccer players who wore them for 20 soccer matches (each of 90 minutes' duration). Serviceability of the shirts was measured by visual inspection on a 5-point scale based on the seam condition. The study found that T-shirts produced with the lapped seam show better serviceability, while T-shirts made with the flat seam obtain the lowest serviceability score. From the calculated seam efficiency (seam strength/fabric strength), it was evident that the performance (in terms of strength) of the lapped and bound seams is higher than that of the superimposed seam, and the performance of the superimposed seam is far better than that of the flat seam. It can therefore be predicted that, to obtain a garment of high serviceability, lapped seams could be used instead of superimposed or other seam types. In addition, less stressed garments can be assembled with other seams such as superimposed or flat seams.
Keywords: seam, seam efficiency, serviceability, T-shirt
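Seam efficiency as defined above is simply the ratio of seam strength to fabric strength, usually expressed as a percentage. A short sketch with placeholder grab-test values (not the study's measurements):

```python
def seam_efficiency(seam_strength_n, fabric_strength_n):
    """Seam efficiency (%) = seam strength / fabric strength * 100."""
    return seam_strength_n / fabric_strength_n * 100.0

# Placeholder grab-test results in newtons (seam strength, fabric strength) for one fabric GSM.
grab_results = {
    "lapped":       (310.0, 355.0),
    "superimposed": (265.0, 355.0),
    "flat":         (205.0, 355.0),
}

for seam, (seam_str, fabric_str) in grab_results.items():
    print(f"{seam:12s}: {seam_efficiency(seam_str, fabric_str):.1f}% efficiency")
```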
Procedia PDF Downloads 203
396 A Detailed Experimental Study and Evaluation of Springback under Stretch Bending Process
Authors: A. Soualem
Abstract:
The design of multi-stage deep drawing processes requires the evaluation of many process parameters, such as the intermediate die geometry, blank shape, sheet thickness, blank holder force, friction, lubrication, etc. These process parameters have to be determined for optimum forming conditions before the process design. In general, sheet metal forming may involve stretching, drawing, or various combinations of these basic modes of deformation. It is important to determine the influence of the process variables in the design of sheet metal working processes. In particular, the punch and die corner radii in deep drawing affect the formability. At the same time, the prediction of sheet metal springback after deep drawing is an important issue for the control of manufacturing processes. Nowadays, the importance of this problem is increasing because of the use of high-strength steel sheet as well as aluminum alloys. The aim of this paper is to give a better understanding of springback and its effect in various sheet metal forming processes, such as expansion and restrained deep drawing in the cup drawing process, by varying the die radius and lubricant for two commercially available materials, galvanized steel and aluminum sheet. To achieve these goals, experiments were carried out and compared with other results. The originality of our approach consists in tests carried out by adapting a U-type stretch-bending device to a tensile testing machine, with which we studied and quantified the variation of the springback.
Keywords: springback, deep drawing, expansion, restricted deep drawing
Procedia PDF Downloads 455
395 Measuring Emotion Dynamics on Facebook: Associations between Variability in Expressed Emotion and Psychological Functioning
Authors: Elizabeth M. Seabrook, Nikki S. Rickard
Abstract:
Examining time-dependent measures of emotion such as variability, instability, and inertia, provide critical and complementary insights into mental health status. Observing changes in the pattern of emotional expression over time could act as a tool to identify meaningful shifts between psychological well- and ill-being. From a practical standpoint, however, examining emotion dynamics day-to-day is likely to be burdensome and invasive. Utilizing social media data as a facet of lived experience can provide real-world, temporally specific access to emotional expression. Emotional language on social media may provide accurate and sensitive insights into individual and community mental health and well-being, particularly with focus placed on the within-person dynamics of online emotion expression. The objective of the current study was to examine the dynamics of emotional expression on the social network platform Facebook for active users and their relationship with psychological well- and ill-being. It was expected that greater positive and negative emotion variability, instability, and inertia would be associated with poorer psychological well-being and greater depression symptoms. Data were collected using a smartphone app, MoodPrism, which delivered demographic questionnaires, psychological inventories assessing depression symptoms and psychological well-being, and collected the Status Updates of consenting participants. MoodPrism also delivered an experience sampling methodology where participants completed items assessing positive affect, negative affect, and arousal, daily for a 30-day period. The number of positive and negative words in posts was extracted and automatically collated by MoodPrism. The relative proportion of positive and negative words from the total words written in posts was then calculated. Preliminary analyses have been conducted with the data of 9 participants. While these analyses are underpowered due to sample size, they have revealed trends that greater variability in the emotion valence expressed in posts is positively associated with greater depression symptoms (r(9) = .56, p = .12), as is greater instability in emotion valence (r(9) = .58, p = .099). Full data analysis utilizing time-series techniques to explore the Facebook data set will be presented at the conference. Identifying the features of emotion dynamics (variability, instability, inertia) that are relevant to mental health in social media emotional expression is a fundamental step in creating automated screening tools for mental health that are temporally sensitive, unobtrusive, and accurate. The current findings show how monitoring basic social network characteristics over time can provide greater depth in predicting risk and changes in depression and positive well-being.Keywords: emotion, experience sampling methods, mental health, social media
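For readers unfamiliar with the three time-dependent measures named above, the sketch below computes them for a toy daily valence series: variability as the standard deviation, instability as the mean squared successive difference (MSSD), and inertia as the lag-1 autocorrelation. The series itself is invented, not MoodPrism data.

```python
import numpy as np

def emotion_dynamics(valence):
    """Return (variability, instability, inertia) of a daily emotion-valence series."""
    valence = np.asarray(valence, dtype=float)
    variability = valence.std(ddof=1)                 # SD across days
    instability = np.mean(np.diff(valence) ** 2)      # mean squared successive difference
    centered = valence - valence.mean()
    inertia = np.sum(centered[1:] * centered[:-1]) / np.sum(centered ** 2)  # lag-1 autocorr
    return variability, instability, inertia

# Toy 10-day series of the proportion of positive words in posts (illustrative only).
series = [0.32, 0.45, 0.28, 0.50, 0.41, 0.22, 0.38, 0.47, 0.30, 0.44]
v, s, i = emotion_dynamics(series)
print(f"variability={v:.3f}, instability={s:.3f}, inertia={i:.3f}")
```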
Procedia PDF Downloads 251
394 Capital Accumulation and Unemployment in Namibia, Nigeria and South Africa
Authors: Abubakar Dikko
Abstract:
The research investigates the causes of unemployment in Namibia, Nigeria and South Africa, and the role of capital accumulation in reducing the unemployment profile of these economies, as proposed by post-Keynesian economics. This is conducted through an extensive review of the literature on NAIRU models, focused on the post-Keynesian view of unemployment within the NAIRU framework. The NAIRU (non-accelerating inflation rate of unemployment) model has become a dominant framework used in macroeconomic analysis of unemployment. The study adopts the post-Keynesian argument that capital accumulation is a major determinant of unemployment. Unemployment remains the fundamental socio-economic challenge facing African economies and has been a burden to their citizens. Namibia, Nigeria and South Africa are major African nations battling high unemployment rates; in 2013, the countries recorded unemployment rates of 16.9%, 23.9% and 24.9%, respectively. Most of the unemployed in these economies are youth. Roughly 40% of working-age South Africans have jobs, and the proportion in Nigeria and Namibia is even lower. Unemployment in Africa has wide implications for households; it has led to extensive poverty and inequality and created rampant criminality. Recently, South Africa experienced xenophobic attacks linked to unemployment: the high unemployment rate led citizens to chase away foreigners, claiming that they had taken away their jobs. The study proposes that there is a strong relationship between capital accumulation and unemployment in Namibia, Nigeria and South Africa, and that capital accumulation explains the high unemployment rates in these countries. For these economies to achieve a steady-state level of employment and a satisfactory level of economic growth and development, capital accumulation needs to take place. The countries in the study have been selected after critical research and investigation, based on the following criteria: African economies with unemployment rates above 15% and with about 40% of their workforce unemployed, a level regarded as critical in Africa by the International Labour Organization (ILO), and African countries with a low level of capital accumulation. Adequate statistical measures have been employed using time-series analysis, and the results reveal that capital accumulation is the main driver of unemployment performance in the chosen African countries: an increase in capital accumulation reduces unemployment significantly. The results of the research will be useful and relevant to the federal governments and the ministries, departments and agencies (MDAs) of Namibia, Nigeria and South Africa in resolving the high and persistent unemployment rates in their economies, a great burden that slows the growth and development of developing economies. The results can also be useful to the World Bank, the African Development Bank and the ILO in their further research on how to tackle unemployment in developing and emerging economies.
Keywords: capital accumulation, unemployment, NAIRU, Post-Keynesian economics
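As a hedged illustration of the kind of time-series estimation described above, the sketch below regresses an unemployment rate on a capital-accumulation proxy (gross fixed capital formation as a share of GDP) by ordinary least squares. The data are fabricated placeholders, not the series used in the study, and the specification is deliberately simplified.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder annual series for one country (illustrative, not actual data).
years        = np.arange(2000, 2014)                               # 14 observations
capital_accu = np.array([15.2, 15.8, 16.1, 16.9, 17.4, 18.0, 18.6, 19.1,
                         19.5, 18.8, 19.9, 20.4, 21.0, 21.6])      # GFCF, % of GDP
unemployment = np.array([26.1, 25.7, 25.4, 24.8, 24.3, 23.9, 23.2, 22.8,
                         22.5, 23.4, 22.1, 21.7, 21.2, 20.8])      # % of labour force

X = sm.add_constant(capital_accu)       # intercept + capital-accumulation regressor
model = sm.OLS(unemployment, X).fit()
print(model.params)      # a negative slope would indicate that more accumulation
print(model.pvalues)     # is associated with lower unemployment
```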
Procedia PDF Downloads 265
393 Option Pricing Theory Applied to the Service Sector
Authors: Luke Miller
Abstract:
This paper develops an options pricing methodology to value strategic pricing strategies in the services sector. More specifically, this study provides a unifying taxonomy of current service sector pricing practices, frames these pricing decisions as strategic real options, demonstrates accepted option valuation techniques to assess service sector pricing decisions, and suggests future research areas where pricing decisions and real options overlap. Enhancing revenue in the service sector requires proactive decision making in a world of uncertainty. In an effort to strategically price service products, revenue enhancement necessitates a careful study of the service costs, customer base, competition, legalities, and shared economies with the market. Pricing decisions involve the quality of inputs, manpower, and best practices to maintain superior service. These decisions further hinge on identifying relevant pricing strategies and understanding how these strategies impact a firm’s value. A relatively new area of research applies option pricing theory to investments in real assets and is commonly known as real options. The real options approach is based on the premise that many corporate decisions to invest or divest in assets are simply an option wherein the firm has the right to make an investment without any obligation to act. The decision maker, therefore, has more flexibility and the value of this operating flexibility should be taken into consideration. The real options framework has already been applied to numerous areas including manufacturing, inventory, natural resources, research and development, strategic decisions, technology, and stock valuation. Additionally, numerous surveys have identified a growing need for the real options decision framework within all areas of corporate decision-making. Despite the wide applicability of real options, no study has been carried out linking service sector pricing decisions and real options. This is surprising given the service sector comprises 80% of the US employment and Gross Domestic Product (GDP). Identifying real options as a practical tool to value different service sector pricing strategies is believed to have a significant impact on firm decisions. This paper identifies and discusses four distinct pricing strategies available to the service sector from an options’ perspective: (1) Cost-based profit margin, (2) Increased customer base, (3) Platform pricing, and (4) Buffet pricing. Within each strategy lie several pricing tactics available to the service firm. These tactics can be viewed as options the decision maker has to best manage a strategic position in the market. To demonstrate the effectiveness of including flexibility in the pricing decision, a series of pricing strategies were developed and valued using a real options binomial lattice structure. The options pricing approach discussed in this study allows service firms to directly incorporate market-driven perspectives into the decision process and thus synchronizing service operations with organizational economic goals.Keywords: option pricing theory, real options, service sector, valuation
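A compact, hedged sketch of the binomial-lattice valuation referred to above, applied to a stylized deferrable investment in a new pricing platform: the firm holds the right, but not the obligation, to invest, and the project's present value evolves on a Cox-Ross-Rubinstein lattice. The project value, investment cost, volatility and rates are invented placeholders, not figures from the paper.

```python
import math

def binomial_real_option(V0, I, r, sigma, T, steps):
    """Value of an American-style option to invest I in a project worth V0 today (CRR lattice)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))          # up factor
    d = 1.0 / u                                  # down factor
    p = (math.exp(r * dt) - d) / (u - d)         # risk-neutral probability
    disc = math.exp(-r * dt)

    # Option payoffs at maturity: max(project value - investment, 0).
    values = [max(V0 * u**j * d**(steps - j) - I, 0.0) for j in range(steps + 1)]

    # Backward induction: keep the option alive or exercise early at each node.
    for step in range(steps - 1, -1, -1):
        values = [max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                      V0 * u**j * d**(step - j) - I)
                  for j in range(step + 1)]
    return values[0]

# E.g. a platform-pricing rollout worth 10 M today, costing 9 M, deferrable for 2 years.
print(binomial_real_option(V0=10.0, I=9.0, r=0.05, sigma=0.35, T=2.0, steps=100))
```

The value returned exceeds the static NPV of 1 M because the lattice credits the flexibility to wait for favourable market conditions, which is the point the paper makes about pricing decisions.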
Procedia PDF Downloads 356
392 Application of Shape Memory Alloy as Shear Connector in Composite Bridges: Overview of State-of-the-Art
Authors: Apurwa Rastogi, Anant Parghi
Abstract:
Shape memory alloys (SMAs) are memory metals with a high calibre to outperform as a civil construction material. They showcase novel functionality of undergoing large deformations and self-healing capability (pseudoelasticity) that leads to its emerging applications in a variety of areas. In the existing literature, most of the studies focused on the behaviour of SMA when used in critical regions of the smart buildings/bridges designed to withstand severe earthquakes without collapse and also its various applications in retrofitting works. However, despite having high ductility, their uses as construction joints and shear connectors in composite bridges are still unexplored in the research domain. This article presents to gain a broad outlook on whether SMAs can be partially used as shear connectors in composite bridges. In this regard, existing papers on the characteristics of shear connectors in the composite bridges will be discussed thoroughly and matched with the fundamental characteristics and properties of SMA. Since due to the high strength, stiffness, and ductility phenomena of SMAs, it is expected to be a good material for the shear connectors in composite bridges, and the collected evidence encourages the prior scrutiny of its partial use in the composite constructions. Based on the comprehensive review, important and necessary conclusions will be affirmed, and further emergence of research direction on the use of SMA will be discussed. This opens the window of new possibilities of using smart materials to enhance the performance of bridges even more in the near future.Keywords: composite bridges, ductility, pseudoelasticity, shape memory alloy, shear connectors
Procedia PDF Downloads 190
391 Effect of Three Desensitizers on Dentinal Tubule Occlusion and Bond Strength of Dentin Adhesives
Authors: Zou Xuan, Liu Hongchen
Abstract:
The ideal dentin desensitizing agent should not only offer good biological safety, a simple clinical operation mode and a superior treatment effect, but should also be durable enough to resist oral temperature changes and mechanical abrasion, so as to achieve persistent desensitization. Also, when a desensitizing agent is used to prevent post-operative hypersensitivity, we should not only prevent it from affecting crown retention but must also understand its effect on the bond strength of dentin adhesives. Various desensitizers and dentin adhesives with different chemical and physical properties are used in clinical treatment. Whether the use of a desensitizing agent affects the bond strength of dentin adhesives still needs further research. In this in vitro study, we built a hypersensitive dentin model and a post-operative dentin model to evaluate the sealing effect and durability of three different dentin desensitizers on exposed tubules, and to evaluate the sealing effect and the bond strength of dentin adhesives after using the three desensitizers on post-operative dentin. The results of this study could provide important references for the clinical use of dentin desensitizing agents. 1. For the three desensitizers, the hypersensitive dentin model was used to evaluate their sealing effect on exposed tubules by SEM observation and dentin permeability analysis; all of them significantly reduced dentin permeability. 2. Test specimens of the three desensitizer-treated groups were subjected to aging with 5000 cycles of thermal cycling and toothbrush abrasion, and dentin permeability was then measured to evaluate the sealing durability of the three desensitizers on exposed tubules; the sealing durability of the three groups differed. 3. The post-operative dentin model was used to evaluate the sealing effect of the three desensitizers on post-operative dentin by SEM and methylene blue staining; all three desensitizers reduced dentin permeability significantly. 4. The influence of the three desensitizers on the bonding efficiency of total-etch and self-etch adhesives was evaluated with micro-tensile bond strength testing and bond interface morphology observation; the dentin bond strength for the Green or group was significantly lower than that of the other two groups (P<0.05).
Keywords: dentin, desensitizer, dentin permeability, thermal cycling, micro-tensile bond strength
Procedia PDF Downloads 394
390 Monitoring of Wound Healing Through Structural and Functional Mechanisms Using Photoacoustic Imaging Modality
Authors: Souradip Paul, Arijit Paramanick, M. Suheshkumar Singh
Abstract:
Traumatic injury is the leading worldwide health problem. Annually, millions of surgical wounds are created for the sake of routine medical care. The healing of these unintended injuries is always monitored based on visual inspection. The maximal restoration of tissue functionality remains a significant concern of clinical care. Although minor injuries heal well with proper care and medical treatment, large injuries negatively influence various factors (vasculature insufficiency, tissue coagulation) and cause poor healing. Demographically, the number of people suffering from severe wounds and impaired healing conditions is burdensome for both human health and the economy. An incomplete understanding of the functional and molecular mechanism of tissue healing often leads to a lack of proper therapies and treatment. Hence, strong and promising medical guidance is necessary for monitoring the tissue regeneration processes. Photoacoustic imaging (PAI), is a non-invasive, hybrid imaging modality that can provide a suitable solution in this regard. Light combined with sound offers structural, functional and molecular information from the higher penetration depth. Therefore, molecular and structural mechanisms of tissue repair will be readily observable in PAI from the superficial layer and in the deep tissue region. Blood vessel formation and its growth is an essential tissue-repairing components. These vessels supply nutrition and oxygen to the cell in the wound region. Angiogenesis (formation of new capillaries from existing blood vessels) contributes to new blood vessel formation during tissue repair. The betterment of tissue healing directly depends on angiogenesis. Other optical microscopy techniques can visualize angiogenesis in micron-scale penetration depth but are unable to provide deep tissue information. PAI overcomes this barrier due to its unique capability. It is ideally suited for deep tissue imaging and provides the rich optical contrast generated by hemoglobin in blood vessels. Hence, an early angiogenesis detection method provided by PAI leads to monitoring the medical treatment of the wound. Along with functional property, mechanical property also plays a key role in tissue regeneration. The wound heals through a dynamic series of physiological events like coagulation, granulation tissue formation, and extracellular matrix (ECM) remodeling. Therefore tissue elasticity changes, can be identified using non-contact photoacoustic elastography (PAE). In a nutshell, angiogenesis and biomechanical properties are both critical parameters for tissue healing and these can be characterized in a single imaging modality (PAI).Keywords: PAT, wound healing, tissue coagulation, angiogenesis
Procedia PDF Downloads 106
389 Assessment of the Masticatory Muscle Function in Young Adults Following SARS-CoV-2 Infection
Authors: Mimoza Canga, Edit Xhajanka, Irene Malagnino
Abstract:
The COVID-19 pandemic has had a significant influence on the lives of millions of people and is a threat to public health. SARS-CoV-2 infection has been associated with a number of health problems, including damage to the lungs and the central nervous system. Additionally, it can cause oral health problems, such as pain and weakening of the chewing muscles. The purpose of the study is to assess masticatory muscle function in young adults aged 18 to 29 years following SARS-CoV-2 infection. Materials and methods: This is a quantitative cross-sectional study conducted in Albania between March 2023 and September 2023. A total of 104 students participated, of whom 64 were female (61.5%) and 40 were male (38.5%). They were divided into four age groups: 18-20, 21-23, 24-26, and 27-29 years old. The students willingly consented to take part in the study and were guaranteed that their participation would remain anonymous. The study recorded no dropouts, and it was carried out in compliance with the Declaration of Helsinki. Statistical analysis was conducted using IBM SPSS Statistics Version 23.0 (Chicago, IL, USA). Data were evaluated utilizing analysis of variance (ANOVA), with a significance level set at P ≤ 0.05. Results: 80 (76.9%) of the participants who had recovered from COVID-19 reported chronic masticatory muscle pain (P < 0.0001) and masticatory muscle spasms (P = 0.002). According to the data analysis, 70 (67.3%) of the participants had a sore throat (P = 0.007), and 74% of the students reported experiencing weakness in their chewing muscles (P = 0.003). The participants reported having undergone the following treatments: azithromycin (500 mg daily), prednisolone sodium phosphate (15 mg/5 mL daily), Augmentin tablets (625 mg), vitamin C (1000 mg), magnesium sulfate (4 g/100 mL), oral vitamin D3 supplementation of 5000 IU daily, ibuprofen (400 mg every 6 hours), and tizanidine (2 mg every 6 hours). Conclusion: This study, conducted in Albania, has limitations, but it can be concluded that COVID-19 directly affects the functioning of the masticatory muscles.
Keywords: Albania, chronic pain, COVID-19, cross-sectional study, masticatory muscles, spasm
Procedia PDF Downloads 35
388 Assessment of the Environmental Compliance at the Jurassic Production Facilities towards HSE MS Procedures and Kuwait Environment Public Authority Regulations
Authors: Fatemah Al-Baroud, Sudharani Shreenivas Kshatriya
Abstract:
Kuwait Oil Company (KOC) is one of the companies for gas & oil production in Kuwait. The oil and gas industry is truly global; with operations conducted in every corner of the globe, the global community will rely heavily on oil and gas supplies. KOC has made many commitments to protect all due to KOC’s operations and operational releases. As per KOC’s strategy, the substantial increase in production activities will bring many challenges in managing various environmental hazards and stresses in the company. In order to handle those environmental challenges, the need of implementing effectively the health, safety, and environmental management system (HSEMS) is significant. And by implementing the HSEMS system properly, the environmental aspects of the activities, products, and services were identified, evaluated, and controlled in order to (i) Comply with local regulatory and other obligatory requirements; (ii) Comply with company policy and business requirements; and (iii) Reduce adverse environmental impact, including adverse impact to company reputation. Assessments for the Jurassic Production Facilities are being carried out as a part of the KOC HSEMS procedural requirement and monitoring the implementation of the relevant HSEMS procedures in the facilities. The assessments have been done by conducting series of theme audits using KOC’s audit protocol at JPFs. The objectives of the audits are to evaluate the compliance of the facilities towards the implementation of environmental procedures and the status of the KEPA requirement at all JPFs. The list of the facilities that were covered during the theme audit program are the following: (1) Jurassic Production Facility (JPF) – Sabriya (2) Jurassic Production Facility (JPF) – East Raudhatian (3) Jurassic Production Facility (JPF) – West Raudhatian (4)Early Production Facility (EPF 50). The auditing process comprehensively focuses on the application of KOC HSE MS procedures at JPFs and their ability to reduce the resultant negative impacts on the environment from the operations. Number of findings and observations were noted and highlighted in the audit reports and sent to all concerned controlling teams. The results of these audits indicated that the facilities, in general view, were in line with KOC HSE Procedures, and there were commitments in documenting all the HSE issues in the right records and plans. Further, implemented several control measures at JPFs that minimized/reduced the environmental impact, such as SRU were installed for sulphur recovery. Future scope and monitoring audit after a sufficient period of time will be carried out in conjunction with the controlling teams in order to verify the current status of the recommendations and evaluate the contractors' performance towards the required actions in preserving the environment.Keywords: assessment of the environmental compliance, environmental and social impact assessment, kuwait environment public authority regulations, health, safety and environment management procedures, jurassic production facilities
Procedia PDF Downloads 187
387 Ecosystem Approach in Aquaculture: From Experimental Recirculating Multi-Trophic Aquaculture to Operational System in Marsh Ponds
Abstract:
Integrated multi-trophic aquaculture (IMTA) is used to reduce waste from aquaculture and to increase productivity through co-cultured species. In this study, we designed a recirculating multi-trophic aquaculture system which requires low energy consumption, little water renewal and little maintenance. European seabass (Dicentrarchus labrax) were raised with co-cultured sea urchins (Paracentrotus lividus), detritivorous polychaetes fed on settled particulate matter, mussels (Mytilus galloprovincialis) used to extract suspended matter, macroalgae (Ulva sp.) used to take up dissolved nutrients, and gastropods (Phorcus turbinatus) used to keep the series of 4 tanks free from fouling. The experiment was performed in triplicate for one month in autumn in an experimental greenhouse at the Institute Océanographique Paul Ricard (IOPR). Thanks to the absence of a physical filter, no pump was needed to pressurize the water, and the water flow was driven by a single air-lift followed by gravity flow. Total suspended solids (TSS), biochemical oxygen demand (BOD5), turbidity, phytoplankton abundance and dissolved nutrients (ammonium NH₄⁺, nitrite NO₂⁻, nitrate NO₃⁻ and phosphate PO₄³⁻) were measured weekly, while dissolved oxygen and pH were recorded continuously. Dissolved nutrients stayed below the detection threshold during the experiment. BOD5 decreased between the fish and macroalgae tanks. TSS increased strongly after 2 weeks and then decreased by the end of the experiment. These results show that bioremediation can be used effectively in an aquaculture system to maintain optimum growing conditions. The fish were the only species fed with an external product (commercial fish pellets) in the system. The other (extractive) species were fed from the waste streams of the tank above or, in the case of the sea urchins, from the Ulva produced by the system. In this way, compared with fish aquaculture alone, the addition of the extractive species increased biomass productivity by a factor of 5.7. In other words, the food conversion ratio dropped from 1.08 with fish only to 0.189 when all species are included. This experimental recirculating multi-trophic aquaculture system was efficient enough to reduce waste and increase productivity. In a second phase, this technology was reproduced at commercial scale. The IOPR, in collaboration with the company Les 4 Marais, ran a recirculating IMTA for 6 months in 8000 m² of water distributed between 4 marsh ponds. A similar air-lift and gravity recirculating system was designed, and only one fed species, a shrimp (Palaemon sp.), was grown together with 3 extractive species. Thanks to this joint work at laboratory and commercial scales, we will be able to challenge the IMTA system and discuss this sustainable aquaculture technology.
Keywords: bioremediation, integrated multi-trophic aquaculture (IMTA), laboratory and commercial scales, recirculating aquaculture, sustainable
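The productivity comparison above rests on the food conversion ratio (FCR), i.e. the external feed supplied divided by the total biomass gained. A minimal sketch with placeholder masses (not the trial's raw data) shows how the ratio falls once the biomass of the extractive species is credited to the same feed input:

```python
def food_conversion_ratio(feed_kg, biomass_gain_kg):
    """FCR = external feed input / biomass produced (lower is better)."""
    return feed_kg / biomass_gain_kg

feed = 54.0                       # kg of commercial pellets fed to the seabass (placeholder)
fish_gain = 50.0                  # kg of seabass biomass gained (placeholder)
extractive_gain = {"sea urchins": 30.0, "mussels": 120.0, "polychaetes": 15.0,
                   "Ulva": 60.0, "gastropods": 10.0}      # placeholder gains, kg

total_gain = fish_gain + sum(extractive_gain.values())
print(f"fish only FCR:   {food_conversion_ratio(feed, fish_gain):.2f}")
print(f"whole-system FCR: {food_conversion_ratio(feed, total_gain):.3f}")
print(f"productivity factor: {total_gain / fish_gain:.1f}x")
```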
Procedia PDF Downloads 152
386 Spatial Distribution, Characteristics, and Pollution Risk Assessment of Microplastics in Sediments from Karnaphuli River Estuary, Bangladesh
Authors: Md. Refat Jahan Rakib, M. Belal Hossain, Rakesh Kumar, Md. Akram Ullah, Sultan Al Nahian, Nazmun Naher Rima, Tasrina Rabia Choudhury, Samia Islam Liba, Jimmy Yu, Mayeen Uddin Khandaker, Abdelmoneim Sulieman, Mohamed Mahmoud Sayed
Abstract:
Microplastics (MPs) have become an emerging global pollutant due to their widespread dispersion and potential threats to marine ecosystems. However, studies on MPs in the estuarine and coastal ecosystems of Bangladesh are very limited or unavailable. Here, we conducted the first study of the abundance, distribution, characteristics and potential risk of microplastics in the sediment of the Karnaphuli River estuary, Bangladesh. Microplastic particles were extracted from sediments at 30 stations along the estuary by density separation and then enumerated and characterized using a stereomicroscope and Fourier transform infrared (FT-IR) spectroscopy. In the collected sediment, the number of MPs varied from 22.29 to 59.5 items kg⁻¹ of dry weight (DW), with an average of 1177 particles kg⁻¹ DW. The mean abundance was higher downstream and on the left bank of the estuary, where the predominant shape, colour and size of MPs were films (35%), white (19%) and >5000 μm (19%), respectively. The main polymer types were polyethylene terephthalate, polystyrene, polyethylene, cellulose and nylon. MPs were found to pose risks (low to high) in the sediment of the estuary, with the highest risk occurring at one station near a sewage outlet, according to risk analyses using the pollution risk index (PRI), polymer risk index (H), contamination factors (CFs) and pollution load index (PLI). The single-value index PLI clearly demonstrated that all sampling sites were considerably polluted with microplastics (PLI > 1). H values showed that toxic polymers possess higher polymeric hazard scores even at lower proportions, and vice versa. This investigation uncovered new insights into the status of MPs in the sediments of the Karnaphuli River estuary, laying the groundwork for future research and for the control and management of microplastic pollution.
Keywords: microplastics, polymers, pollution risk assessment, Karnaphuli estuary
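The contamination factor and pollution load index used above follow the standard definitions: CF is the measured microplastic abundance divided by a background (baseline) value, and PLI is the n-th root of the product of the CFs, i.e. their geometric mean. A short sketch under those assumptions, with placeholder abundances and an assumed background value rather than the study's data:

```python
import numpy as np

def contamination_factors(abundances, background):
    """CF_i = measured MP abundance at station i / background abundance."""
    return np.asarray(abundances, dtype=float) / background

def pollution_load_index(cfs):
    """PLI = (CF_1 * CF_2 * ... * CF_n) ** (1/n), the geometric mean of the CFs."""
    cfs = np.asarray(cfs, dtype=float)
    return float(np.exp(np.mean(np.log(cfs))))

abundances = [850, 1240, 990, 1530, 1177]   # placeholder items/kg DW at a few stations
background = 540.0                           # assumed baseline abundance
cfs = contamination_factors(abundances, background)
print("CF per station:", np.round(cfs, 2))
print("PLI:", round(pollution_load_index(cfs), 2))   # PLI > 1 indicates pollution
```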
Procedia PDF Downloads 81
385 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability
Authors: Akshay B. Pawar, Rohit Y. Parasnis
Abstract:
Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmogramic (PPG) signals as compared to ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV. They have also compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval- based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in or alternatives to the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method) that has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method) and the tangent-intersection interval method (TI method) were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (consisting of both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through certain digital filters. After filtering, the following HRV parameters were computed from PPG using each of the five methods and also from ECG using the gold standard method: time domain parameters (SDNN, pNN50 and RMSSD), frequency domain parameters (Very low-frequency power (VLF), Low-frequency power (LF), High-frequency power (HF) and Total power or “TP”). Besides, Poincaré plots were also plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (shows more than 93% correlation with the standard method) and the PP method (mean correlation: 88%) whereas the TI, VV and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily well with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot
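For reference, the sketch below computes the main time-domain parameters compared in the study (SDNN, RMSSD, pNN50) and the Poincaré SD1/SD2 ratio from a list of inter-beat intervals, using the conventional definitions. The interval series shown is synthetic, not data from the ten subjects.

```python
import numpy as np

def hrv_time_domain(ibi_ms):
    """SDNN, RMSSD, pNN50 and Poincare SD1/SD2 from inter-beat intervals in milliseconds."""
    ibi = np.asarray(ibi_ms, dtype=float)
    diff = np.diff(ibi)
    sdnn = ibi.std(ddof=1)                                   # overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))                      # short-term variability
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50.0)             # % successive diffs > 50 ms
    sd1 = np.sqrt(0.5) * diff.std(ddof=1)                    # Poincare short-axis
    sd2 = np.sqrt(max(2 * sdnn**2 - 0.5 * diff.std(ddof=1)**2, 0.0))   # long-axis
    return sdnn, rmssd, pnn50, sd1 / sd2

# Synthetic RR / pulse-to-pulse intervals (ms).
ibi = [812, 790, 845, 830, 801, 778, 864, 820, 795, 840, 810, 825]
sdnn, rmssd, pnn50, sd_ratio = hrv_time_domain(ibi)
print(f"SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms, pNN50={pnn50:.1f}%, SD1/SD2={sd_ratio:.2f}")
```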
Procedia PDF Downloads 324
384 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring
Authors: Zheng Wang, Zhenhong Li, Jon Mills
Abstract:
Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring towards various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise from processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access-memory (RAM), the delay of displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and it processes continuous GBSAR images unit by unit. Images within a window form a basic unit. By taking this strategy, the RAM requirement is reduced to only one unit of images and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected as it keeps temporarily-coherent pixels which are present only in some certain units but not in the whole observation period. The chain supports real-time processing of the continuous data and the delay of creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images in a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. The temporal-averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially-coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the level of a few sub-millimetres are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring of a wide range of scientific and practical applications.Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring
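As a hedged illustration of the unit-by-unit strategy described above, the sketch below groups a continuous acquisition stream into fixed-size windows (the "units") and converts an unwrapped interferometric phase change into a line-of-sight displacement via d = -λ·Δφ/(4π). The window length, overlap and the Ku-band wavelength are assumptions for illustration, not parameters of the package.

```python
import numpy as np

WAVELENGTH_M = 0.0176          # assumed Ku-band GBSAR wavelength (~17.6 GHz)

def phase_to_displacement(delta_phase_rad):
    """Line-of-sight displacement for an unwrapped phase change: d = -lambda * dphi / (4*pi)."""
    return -WAVELENGTH_M * delta_phase_rad / (4.0 * np.pi)

def split_into_units(n_images, unit_size, overlap):
    """Indices of the image windows ('units') processed one at a time, with overlap."""
    step = unit_size - overlap
    return [list(range(start, min(start + unit_size, n_images)))
            for start in range(0, n_images - overlap, step)]

print(split_into_units(n_images=20, unit_size=8, overlap=2))
print(f"{phase_to_displacement(np.pi / 2) * 1000:.2f} mm")   # quarter-cycle phase change
```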
Procedia PDF Downloads 163
383 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River
Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang
Abstract:
The analysis and study of open-channel flow dynamics for river applications has been based on flow modelling using discrete numerical models built on the hydrodynamic equations. The overall spatial characteristics of rivers, i.e. their length-to-depth-to-width ratios, generally allow one to disregard processes occurring in the vertical or transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and the extrapolation of various scenarios. The Magdalena River in Colombia drains a large basin from south to north over 1550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water-level fluctuation and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander evolves at a steady pace, repeated flooding has endangered a number of neighborhoods. This study was undertaken to correctly model the flow characteristics of the river in this region in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a River Surveyor ADCP. In order to characterize the erosion process occurring through the meander, extensive suspended-sediment and river bed samples were retrieved, and soil perforations were made along the banks. Based on the DEM ground digital mapping survey and the field data, a 2DH flow model was prepared using the Iber freeware, which is based on the finite volume method in an unstructured mesh environment. The calibration process was carried out by comparing model output with available historical data from a nearby hydrologic gauging station. Although the model was able to effectively predict overall flow processes in the region, its spatial characteristics and limitations related to pressure conditions did not allow for an accurate representation of erosion processes occurring over specific bank areas and dwellings. In particular, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross section resulting from severe erosion has hindered the model's ability to provide decision makers with a valid, up-to-date planning tool.
Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander
Procedia PDF Downloads 319
382 Children's Literature with Mathematical Dialogue for Teaching Mathematics at Elementary Level: An Exploratory First Phase about Students’ Difficulties and Teachers’ Needs in Third and Fourth Grade
Authors: Goulet Marie-Pier, Voyer Dominic, Simoneau Victoria
Abstract:
In a previous research project (2011-2019) funded by the Quebec Ministry of Education, an educational approach was developed based on the teaching and learning of place value through children's literature. Subsequently, the effect of this approach on first graders' (6-7 years old) conceptual understanding of the concept was studied. The current project aims to create a series of children's books to help older elementary school students (8-10 years old) develop a conceptual understanding of complex mathematical concepts taught at their grade level, rather than a more typical procedural understanding. Since no educational materials or children's books exist that achieve these goals, four stories, accompanied by mathematical activities, will be created to support students, and their teachers, in the learning and teaching of mathematical concepts that can be challenging within the mathematics curriculum. The stories will also introduce mathematical dialogue into the characters' discourse with the aim of addressing various mathematical foundations about which erroneous statements are often made by students and occasionally by teachers. In other words, the stories aim to empower students seeking a real understanding of difficult mathematical concepts, as well as teachers seeking a way to teach these difficult concepts that goes beyond memorizing rules and procedures. In order to choose the concepts that will be part of the stories, it is essential to understand the current landscape regarding the main difficulties experienced by students in third and fourth grade (8-10 years old) and their teachers' needs. From this perspective, the preliminary phase of the study, as discussed in the presentation, will provide critical insight into the mathematical concepts with which the target grade levels struggle the most. From these data, the research team will select the concepts and develop the stories in the second phase of the study. Two questions are preliminary to the implementation of the approach, namely (1) which mathematical concepts are considered the most “difficult to teach” by teachers in the third and fourth grades? and (2) according to teachers, what are the main difficulties encountered by their students in numeracy? Self-administered online questionnaires using the SimpleSondage software will be sent to all third- and fourth-grade teachers in nine school service centers in the Quebec region, representing approximately 300 schools. The data collected in the fall of 2022 will be used to compare the difficulties identified by the teachers with those prevalent in the scientific literature. Because it ensures consistency between the proposed approach and the true needs of the educational community, this preliminary phase is essential to the relevance of the rest of the project. It is also an essential first step towards achieving the two ultimate goals of the research project: improving elementary school students' learning in numeracy and contributing to the professional development of elementary school teachers. Keywords: children’s literature, conceptual understanding, elementary school, learning and teaching, mathematics
Procedia PDF Downloads 90
381 Investigation of the Self-Healing Sliding Wear Characteristics of Niti-Based PVD Coatings on Tool Steel
Authors: Soroush Momeni
Abstract:
The excellent damping capacity and superelasticity of bulk NiTi shape memory alloy (SMA) make it a suitable material of choice for tools in machining processes as well as for tribological systems. Although a NiTi SMA thin film has the same damping capacity as the bulk alloy, it has poor mechanical properties and undesirable tribological performance. This study aims at eliminating these application limitations of NiTi SMA thin films. To achieve this goal, NiTi thin films were magnetron sputtered as an interlayer between reactively sputtered hard TiCN coatings and hard work tool steel substrates. The microstructure, composition, crystallographic phases, and mechanical and tribological properties of the deposited thin films were analyzed using field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), nanoindentation, ball-on-disc testing, scratch testing, and three-dimensional (3D) optical microscopy. It was found that, with a specific coating architecture, the superelasticity of the NiTi interlayer can be combined with the high hardness and wear resistance of the TiCN protective layer. The results revealed that the thickness of the NiTi interlayer is an important factor controlling the mechanical and tribological performance of the bi-layer composite coating system. Keywords: PVD coatings, sliding wear, hardness, tool steel
Procedia PDF Downloads 285
380 Safety Assessment of Traditional Ready-to-Eat Meat Products Vended at Retail Outlets in Kebbi and Sokoto States, Nigeria
Authors: M. I. Ribah, M. Jibir, Y. A. Bashar, S. S. Manga
Abstract:
Food safety is a significant and growing public health problem worldwide, including in Nigeria as a developing country, since food-borne diseases are important contributors to the huge burden of human sickness and death. In Nigeria, traditional ready-to-eat meat products (RTE-MPs) like balangu, tsire and guru, and dried meat products like kilishi, dambun nama and banda, are reported to be highly appreciated because of their eating qualities. The consumption of these products is considered safe due to the treatments usually involved in their production. However, during processing and handling, the products can be contaminated by pathogens that cause food poisoning. Therefore, a hazard identification for pathogenic bacteria on some traditional RTE-MPs was conducted in Kebbi and Sokoto States, Nigeria. A total of 116 RTE-MP samples (balangu-38, kilishi-39 and tsire-39) were obtained from retail outlets and analyzed using standard cultural microbiological procedures in general and selective enrichment media to isolate the target pathogens. A six-fold serial dilution was prepared, and colonies were counted using the pour-plate method; dilutions were assigned to pre-labeled Petri dishes for each sample. A volume of 10-12 ml of molten nutrient agar cooled to 42-45°C was poured into each Petri dish, 1 ml from each of the 10², 10⁴ and 10⁶ dilutions of every sample was poured onto its pre-labeled Petri plate, and colonies were then counted. The isolated pathogens were identified and confirmed after a series of biochemical tests. Frequencies and percentages were used to describe the presence of pathogens. The General Linear Model was used to analyze data on pathogen presence according to RTE-MPs, and means were separated using the Tukey test at the 0.05 level. Of the 116 RTE-MP samples collected, 35 (30.17%) were found to be contaminated with the tested pathogens. Prevalence results showed that Escherichia coli, Salmonella and Staphylococcus aureus were present in the samples. The mean total bacterial count was 23.82×10⁶ cfu/g. The frequencies of the individual pathogens isolated were Staphylococcus aureus 18 (15.51%), Escherichia coli 12 (10.34%) and Salmonella 5 (4.31%). Also, among the RTE-MPs tested, the total bacterial counts were found to differ significantly (P < 0.05), with 1.81, 2.41 and 2.9×10⁴ cfu/g for tsire, kilishi, and balangu, respectively. The study concluded that the presence of pathogenic bacteria in balangu could pose grave health risks to consumers and hence recommended good manufacturing practices in the production of balangu to improve the products’ safety. Keywords: ready-to-eat meat products, retail outlets, public health, safety assessment
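As a minimal arithmetic sketch of how plate counts of this kind translate into cfu/g, the snippet below scales a colony count by its dilution factor and the plated volume; the colony count and the assumption that the dilution factor already includes the initial sample homogenisation are illustrative, not values from the study.

```python
# Minimal sketch (illustrative numbers, not the study's data): converting a
# pour-plate colony count into cfu per gram of sample.
def cfu_per_gram(colonies, dilution_factor, plated_volume_ml=1.0):
    """Colony-forming units per gram, from a single countable plate."""
    return colonies * dilution_factor / plated_volume_ml

# e.g. 45 colonies counted on the plate seeded with 1 ml of the 10^6 dilution
print(f"{cfu_per_gram(45, 1e6):.2e} cfu/g")   # 4.50e+07 cfu/g
```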
Procedia PDF Downloads 134
379 Theoretical and Experimental Analysis of Hard Material Machining
Authors: Rajaram Kr. Gupta, Bhupendra Kumar, T. V. K. Gupta, D. S. Ramteke
Abstract:
Machining of hard materials is a recent technology for the direct production of work-pieces. The primary challenge in machining these materials is the selection of cutting tool inserts that facilitate an extended tool life and high-precision machining of the component. These materials are widely used for making precision parts for the aerospace industry. Nickel-based alloys are typically used in extreme-environment applications where a combination of strength, corrosion resistance and oxidation resistance is required. The present paper reports theoretical and experimental investigations carried out to understand the influence of the machining parameters on the response parameters. Considering the basic machining parameters (speed, feed and depth of cut), a study has been conducted to observe their influence on material removal rate, surface roughness, cutting forces and the corresponding tool wear. Experiments were designed and conducted with the help of the Central Composite Rotatable Design technique. The results reveal that, for the given range of process parameters, higher depths of cut are favorable for material removal rate and low feed rates are favorable for cutting forces. Low feed rates and high rotational speeds are suitable for a better finish and higher tool life. Keywords: speed, feed, depth of cut, roughness, cutting force, flank wear
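For reference, the material removal rate response mentioned above follows directly from the three basic parameters; the short sketch below uses the standard turning relation MRR = cutting speed × feed × depth of cut, with illustrative parameter values that are not taken from the experiments.

```python
# Minimal sketch (illustrative values, not the study's data): material removal
# rate in turning as the product of cutting speed, feed and depth of cut.
def material_removal_rate(cutting_speed_m_min, feed_mm_rev, depth_of_cut_mm):
    """MRR in mm^3/min: v_c [m/min] * 1000 * f [mm/rev] * a_p [mm]."""
    return cutting_speed_m_min * 1000.0 * feed_mm_rev * depth_of_cut_mm

print(material_removal_rate(60.0, 0.1, 1.0))   # 6000.0 mm^3/min
```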
Procedia PDF Downloads 285
378 Immune Modulation and Cytomegalovirus Reactivation in Sepsis-Induced Immunosuppression
Authors: G. Lambe, D. Mansukhani, A. Shetty, S. Khodaiji, C. Rodrigues, F. Kapadia
Abstract:
Introduction: Sepsis is known to cause impairment of both innate and adaptive immunity and involves an early uncontrolled inflammatory response followed by a protracted immunosuppression phase, which includes decreased expression of cell receptors, T cell anergy and exhaustion, and impaired cytokine production, all of which may create a high risk of secondary infections due to a reduced response to antigens. Although human cytomegalovirus (CMV) is widely recognized as a serious viral pathogen in sepsis and immunocompromised patients, the incidence of CMV reactivation in patients with sepsis lacking strong evidence of immunosuppression is not well defined. It is therefore important to determine the association between CMV reactivation and sepsis-induced immunosuppression. Aim: To determine the association over time between the incidence of CMV reactivation and immune modulation in sepsis-induced immunosuppression. Material and Methods: Ten CMV-seropositive adult patients with severe sepsis were included in this study. Blood samples were collected on Day 0 and then weekly up to 21 days. CMV load was quantified by real-time PCR using plasma. The expression of the immunosuppression markers HLA-DR, PD-1 and regulatory T cells was determined by flow cytometry using whole blood. Results: At Day 0, no CMV reactivation was observed in 6/10 patients; in these patients, the median time to reactivation was 14 days (range, 7-14 days). The remaining four patients had, at Day 0, a mean viral load of 1802 ± 2599 copies/ml, which increased with time. At Day 21, the mean viral load for all 10 patients was 60949 ± 179700 copies/ml, indicating that viremia increased with the length of stay in the hospital. HLA-DR expression on monocytes significantly increased from Day 0 to Day 7 (p = 0.001), after which no significant change was observed until Day 21 for all patients except three. In these three patients, HLA-DR expression on monocytes showed a decrease at elevated viral loads (>5000 copies/ml), indicating immune suppression. However, the other markers, PD-1 and regulatory T cells, did not show any significant changes. Conclusion: These preliminary findings suggest that CMV reactivation can occur in patients with severe sepsis; indeed, the viral load continued to increase with the length of stay in the hospital. Immune suppression, indicated by decreased expression of HLA-DR alone, was observed in the three patients with elevated viral load. Keywords: CMV reactivation, immune suppression, sepsis immune modulation, CMV viral load
Procedia PDF Downloads 150
377 Synthesis, Molecular Modeling and Study of 2-Substituted-4-(Benzo[D][1,3]Dioxol-5-Yl)-6-Phenylpyridazin-3(2H)-One Derivatives as Potential Analgesic and Anti-Inflammatory Agents
Authors: Jyoti Singh, Ranju Bansal
Abstract:
Fighting pain and inflammation is a common problem faced by physicians dealing with a wide variety of diseases. Nonsteroidal anti-inflammatory agents (NSAIDs) and opioids have long been the cornerstone of treatment; however, the usefulness of both classes is limited by severe side effects. NSAIDs, which are mainly used to treat mild to moderate inflammatory pain, induce gastric irritation and nephrotoxicity, whereas opioids show an array of adverse reactions such as respiratory depression, sedation, and constipation. Moreover, repeated administration of these drugs induces tolerance to the analgesic effects and physical dependence. The later discovery of selective COX-2 inhibitors (coxibs) promised safety without ulcerogenic side effects; however, long-term use of these drugs resulted in kidney and hepatic toxicity along with an increased risk of secondary cardiovascular effects. The basic approaches to treating inflammation and pain are constantly changing, and researchers are continuously trying to develop safer and more effective anti-inflammatory drug candidates for different inflammatory conditions such as osteoarthritis, rheumatoid arthritis, ankylosing spondylitis, psoriasis and multiple sclerosis. Synthetic 3(2H)-pyridazinones constitute an important scaffold for drug discovery. Structure-activity relationship studies on pyridazinones have shown that attachment of a lactam at N-2 of the pyridazinone ring through a methylene spacer significantly increases the anti-inflammatory and analgesic properties of the derivatives, and further introduction of a heterocyclic ring at the lactam nitrogen improves the biological activities. Keeping these SAR studies in mind, a new series of compounds was synthesized as shown in Scheme 1 and investigated for anti-inflammatory, analgesic and anti-platelet activities, along with docking studies. The structures of the newly synthesized compounds were established by various spectroscopic techniques. All the synthesized pyridazinone derivatives exhibited potent anti-inflammatory and analgesic activity. The homoveratryl-substituted derivative was found to possess the highest anti-inflammatory and analgesic activity, displaying 73.60% inhibition of edema at 40 mg/kg with no ulcerogenic activity, compared to the standard drug indomethacin. Moreover, the 2-substituted-4-benzo[d][1,3]dioxole-6-phenylpyridazin-3(2H)-one derivatives did not produce significant changes in bleeding time and emerged as safe agents. Molecular docking studies also showed good binding interactions at the active site of the cyclooxygenase-2 (hCOX-2) enzyme. Keywords: anti-inflammatory, analgesic, pyridazin-3(2H)-one, selective COX-2 inhibitors
Procedia PDF Downloads 201
376 MXene Mediated Layered 2D-3D-2D g-C3N4@WO3@Ti3C2 Multijunctional Heterostructure with Enhanced Photoelectrochemical and Photocatalytic Properties
Authors: Lekgowa Collen Makola, Cecil Naphtaly Moro Ouma, Sharon Moeno, Langelihle Dlamini
Abstract:
In recent years, advances in the field of nanotechnology have produced new strategies to address energy and environmental issues. Among the developing technologies, visible-light-driven photocatalysis is regarded as a sustainable approach for energy production and environmental detoxification, where transition metal oxides (TMOs) and metal-free carbon-based semiconductors such as graphitic carbon nitride (CN) have shown notable potential. Herein, a g-C₃N₄@WO₃@Ti₃C₂Tx three-component multijunction photocatalyst was fabricated via facile ultrasonic-assisted self-assembly, followed by calcination to facilitate extensive integration of the materials. A series of composites with different Ti₃C₂ wt% loadings was prepared, denoted 1-CWT, 3-CWT, 5-CWT and 7-CWT, corresponding to 1, 3, 5 and 7 wt%, respectively. Systematic characterization using spectroscopic and microscopic techniques was employed to validate the successful preparation of the photocatalysts. Enhanced optoelectronic and photoelectrochemical properties were observed for the g-C₃N₄@WO₃@Ti₃C₂ heterostructure relative to the individual materials. Photoluminescence spectra and Nyquist plots show restrained recombination rates and improved photocarrier conductivities, respectively, which was credited to the synergistic coupling effect and the presence of the highly conductive Ti₃C₂ MXene. The strong interfacial contact surfaces formed in the composite were confirmed using XPS. Multiple charge transfer mechanisms were proposed for the heterostructure, which couples a Z-scheme and a Schottky junction mediated by the Ti₃C₂ MXene. Bode phase plots show improved charge carrier lifetimes upon formation of the multijunction photocatalyst. Moreover, the transient photocurrent density of 7-CWT is 40 and seven times higher than that of g-C₃N₄ and WO₃, respectively. Unlike in the traditional Z-scheme, the ternary heterostructure possesses interfaces through the metallic 2D Ti₃C₂ MXene, which provide charge transfer channels for efficient photocarrier transfer, with a carrier concentration (N_D) of 17.49×10²¹ cm⁻³ and a 4.86% photo-to-chemical conversion efficiency. The as-prepared ternary g-C₃N₄@WO₃@Ti₃C₂Tx exhibited excellent photoelectrochemical properties with preserved redox band potentials able to facilitate efficient photo-oxidation and photo-reduction reactions. The fabricated multijunction photocatalyst shows potential for use in an extensive range of photocatalytic processes, viz. the production of valuable hydrocarbons from CO₂, H₂ production, and the degradation of a plethora of pollutants from wastewater. Keywords: photocatalysis, Z-scheme, multijunction heterostructure, Ti₃C₂ MXene, g-C₃N₄
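Donor densities of the kind quoted above (N_D) are commonly extracted from the slope of a Mott-Schottky (1/C² vs V) plot; the sketch below shows only that standard relation as an assumed illustration, since the abstract does not state the measurement details, and the permittivity and slope values are placeholders.

```python
# Minimal sketch of the standard Mott-Schottky relation; eps_r and the
# 1/C^2-vs-V slope below are placeholder values, not the paper's data.
E_CHARGE = 1.602e-19        # elementary charge, C
EPS_0 = 8.854e-12           # vacuum permittivity, F/m

def donor_density(slope_m4_per_F2_V, eps_r):
    """N_D (m^-3) from the slope of a 1/C^2 vs V plot (capacitance per unit area)."""
    return 2.0 / (E_CHARGE * eps_r * EPS_0 * slope_m4_per_F2_V)

n_d = donor_density(slope_m4_per_F2_V=5e9, eps_r=20.0)   # assumed numbers
print(f"N_D ~ {n_d:.2e} m^-3  ({n_d * 1e-6:.2e} cm^-3)")
```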
Procedia PDF Downloads 126
375 Assumption of Cognitive Goals in Science Learning
Authors: Mihail Calalb
Abstract:
The aim of this research is to identify ways of achieving sustainable conceptual understanding in science lessons. For this purpose, a set of teaching and learning strategies belonging to the theory of visible teaching and learning (VTL) is studied. As a result, a new didactic approach named "learning by being" is proposed, and its correlation with the educational paradigms currently existing in the science teaching domain is analysed. In the context of VTL, the author describes the main strategies of learning by being, such as guided self-scaffolding, structuring of information, and recurrent use of previous knowledge or help seeking. Due to the synergy of these learning strategies applied simultaneously in class, the impact factor of learning by being on students' cognitive achievement is up to 93% (the benchmark level is 40%, obtained when an experienced teacher applies the same conventional strategy consistently over two academic years). The key idea in learning by being is the assumption of cognitive goals by the student. From this perspective, the article discusses the role of the student's personal learning effort within several teaching strategies employed in VTL. The research results emphasize that three mandatory student-related moments are present in each constructivist teaching approach: a) the student's personal learning effort, b) mutual student-teacher feedback, and c) metacognition. Thus, a successful educational strategy will aim to involve students in the class process to the highest degree possible, so that they not only know the learning objectives but also assume them. In this way, we arrive at the ownership of cognitive goals, or students' deep intrinsic motivation. A series of approaches is inherent to students' ownership of cognitive goals: independent research (with an impact factor on cognitive achievement of 83% according to the results of VTL), knowledge of success criteria (impact factor of 113%), and the ability to reveal similarities and patterns (impact factor of 132%). Although it is generally accepted that the school is a public service, it does not belong to the entertainment industry, and in most cases education declared as student-centered actually hides the central role of the teacher. Even if there is a proliferation of constructivist concepts, mainly at the level of science education research, we have to underline that conventional or frontal teaching will never disappear. Research results show that no modern method can replace an experienced teacher with strong pedagogical content knowledge. Such a teacher will inspire and motivate his or her students to love and learn physics. The teacher is precisely the condensation point for an efficient didactic strategy, be it constructivist or conventional. In this way, we could speak of "hybridized teaching", where both the student and the teacher have their share of responsibility. In conclusion, the core of the learning by being approach is guided learning effort, which corresponds to the notion of a teacher-student harmonic oscillator: both guidance from the teacher and the student's effort are equally important. Keywords: conceptual understanding, learning by being, ownership of cognitive goals, science learning
Procedia PDF Downloads 170
374 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series
Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold
Abstract:
To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (e.g., OzFlux, AmeriFlux, ChinaFlux) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux records, avoiding the limitations of a single algorithm and therefore providing smaller errors and reduced uncertainties in the gap-filling process. In this study, data from five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNNs) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB when each of these methods was used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 corresponding to the best FFNN). The most significant improvement occurred in the estimation of extreme diurnal values (around midday and sunrise) as well as in nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), owing to the higher rates of photosynthesis, which lead to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the estimation. Therefore, ensemble machine learning models can potentially improve data estimation and regression outcomes when there appears to be no more room for improvement using a single algorithm. Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network
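The two-layer architecture described above can be sketched with off-the-shelf components; the snippet below is a minimal illustration, not the authors' code, and the hidden-layer sizes, hyperparameters and synthetic data are assumptions made for the example (scikit-learn's stacking additionally uses cross-validated first-layer predictions).

```python
# Minimal sketch of a stacked ensemble: five FFNNs with different structures
# feed their estimates into an XGBoost model that produces the final value.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import StackingRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                  # stand-in meteorological drivers
y = X[:, 0] * 2 - X[:, 1] + rng.normal(scale=0.3, size=2000)   # stand-in CO2 flux

first_layer = [
    (f"ffnn_{i}", MLPRegressor(hidden_layer_sizes=h, max_iter=2000, random_state=i))
    for i, h in enumerate([(16,), (32,), (64,), (32, 16), (64, 32)])
]
ensemble = StackingRegressor(
    estimators=first_layer,
    final_estimator=XGBRegressor(n_estimators=300, learning_rate=0.05),
)
ensemble.fit(X, y)
print("R^2 on training data:", ensemble.score(X, y))
```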
Procedia PDF Downloads 141
373 A Study on the Overall Enhancement Strategy of Mountainous Urban Stairway Space Based on Environmental Behavioral Science: Taking the Shibati as an Example
Authors: Hao Fu
Abstract:
Mountain city stairway space is a spatial form unique to mountainous cities such as Chongqing, produced by the city's uneven topography. Initially, people regarded it mainly as a transportation space, but with social progress and the rapid development of the city, the demand for space has become more composite, and the functions of mountain city stairway spaces have been continuously transformed and integrated to become more comprehensive and diversified. The Shibati ("eighteen stairs" in Chinese), located on the Yuzhong Peninsula of Chongqing, is one of the typical representatives. As a typical stairway space of Chongqing, the Shibati has precious historical significance and cultural value. Over time, the Shibati has undergone several repairs and renovations; because of dilapidated housing and inconvenient transportation, more than 90% of the original inhabitants were relocated long ago, and the vast majority of the original buildings have been bulldozed and demolished, leaving only a few historical buildings. In 2021, a Beijing-based design company completed the renovation of the core buildings of the Shibati, and a large number of catering and entertainment businesses have been introduced, making it a representative stairway space of central Chongqing. Through field research, the author personally experienced and perceived the spatial vitality of the Shibati and its rich commercial atmosphere, but still found many problems: the loss of the traditional memory of the Shibati caused by large-scale demolition and construction, internal commercial spaces and forms that are uniform and “Netflix”-like, a lack of regional character, an incomplete spatial sequence, and insufficient open space. The author also noted that, although the renovated stairway resembles the traditional Shibati in form, traditional commercial spaces are lacking. Based on these observations, this paper follows the route of “raising problems → analyzing problems → solving problems”, collating existing theory and combining it with the results of the field research, and finally arrives at a series of measures for the inheritance of spatial memory and the enhancement of spatial vitality of the Shibati, in the hope of providing a reference for the renewal and renovation design of similar stairway spaces in mountainous cities. Keywords: spatial memory, environmental behavioral science, mountain cities, stairway space, spatial enhancement
Procedia PDF Downloads 2