Search results for: energy performance assessment
1226 Mitochondrial DNA Defect and Mitochondrial Dysfunction in Diabetic Nephropathy: The Role of Hyperglycemia-Induced Reactive Oxygen Species
Authors: Ghada Al-Kafaji, Mohamed Sabry
Abstract:
Mitochondria are the site of cellular respiration and produce energy in the form of adenosine triphosphate (ATP) via oxidative phosphorylation. They are the major source of intracellular reactive oxygen species (ROS) and are also a direct target of ROS attack. Oxidative stress and ROS-mediated disruption of mitochondrial function are major components in the pathogenesis of diabetic complications. In this work, the changes in mitochondrial DNA (mtDNA) copy number, biogenesis, gene expression of mtDNA-encoded subunits of electron transport chain (ETC) complexes, and mitochondrial function in response to hyperglycemia-induced ROS, as well as the effect of direct inhibition of ROS on mitochondria, were investigated in an in vitro model of diabetic nephropathy using human renal mesangial cells. The cells were exposed to normoglycemic and hyperglycemic conditions in the presence and absence of Mn(III)tetrakis(4-benzoic acid) porphyrin chloride (MnTBAP) or catalase for 1, 4 and 7 days. ROS production was assessed by confocal microscopy and flow cytometry. mtDNA copy number and PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, COI, and ATPase 6 transcripts, were analyzed by real-time PCR. PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, COI, and ATPase 6 proteins, were analyzed by Western blotting. Mitochondrial function was determined by assessing mitochondrial membrane potential and ATP levels. Hyperglycemia induced a significant increase in the production of mitochondrial superoxide and hydrogen peroxide at day 1 (P < 0.05), and this increase remained significantly elevated at days 4 and 7 (P < 0.05). The copy number of mtDNA and the expression of PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, COI and ATPase 6, increased after one day of hyperglycemia (P < 0.05), with a significant reduction in all of these parameters at 4 and 7 days (P < 0.05).
The mitochondrial membrane potential decreased progressively from 1 to 7 days of hyperglycemia, with a parallel progressive reduction in ATP levels over time (P < 0.05). MnTBAP and catalase treatment of cells cultured under hyperglycemic conditions attenuated ROS production, reversed renal mitochondrial oxidative stress, and improved mtDNA, mitochondrial biogenesis, and function. These results show that hyperglycemia-induced ROS cause an early increase in mtDNA copy number, mitochondrial biogenesis and mtDNA-encoded gene expression of the ETC subunits in human mesangial cells as a compensatory response to the decline in mitochondrial function, which precedes the mtDNA defect and mitochondrial dysfunction as oxidative stress progresses. Protection of renal mitochondria from ROS-mediated damage induced by hyperglycemia may be a novel therapeutic approach for the prevention/treatment of DN.
Keywords: diabetic nephropathy, hyperglycemia, reactive oxygen species, oxidative stress, mtDNA, mitochondrial dysfunction, manganese superoxide dismutase, catalase
Procedia PDF Downloads 247
1225 Evaluating the Success of an Intervention Course in a South African Engineering Programme
Authors: Alessandra Chiara Maraschin, Estelle Trengove
Abstract:
In South Africa, only 23% of engineering students attain their degrees in the minimum time of 4 years. This raises the question: why is the 4-year throughput rate so low? Improving the throughput rate is crucial in guiding students along the shortest possible path to completion. The Electrical Engineering programme has a fixed curriculum, and students must pass all courses in order to graduate. In South Africa, as in several other countries, many students rely on external funding such as bursaries from companies in industry. If students fail a course, they often lose their bursaries, and most cannot fund their 'repeating year' fees. It is thus important to improve the throughput rate, since for many students, graduating from university is a way out of poverty for an entire family. In Electrical Engineering, the Software Development I course (an introduction to C++ programming) has been found to be a significant hurdle course with a low pass rate. It is well documented that students struggle with this type of course, as it introduces a number of new threshold concepts that can be challenging to grasp in a short time frame. In an attempt to mitigate this situation, a part-time night school for Software Development I was introduced in 2015 as an intervention measure. The night-school course covers all the material from the Software Development I module and gives students who failed the course in the first semester a second chance to pass by repeating it. The purpose of this study is to determine whether the introduction of this intervention course can be considered a success. The success of the intervention is assessed in two ways. The study first looks at whether the night-school course contributed to improving the pass rate of the Software Development I course.
Secondly, the study examines whether the intervention contributed to improving the overall throughput from the 2nd year to the 3rd year of study at a South African university. Second-year academic results for a sample of 1216 students were collected for the period 2010-2017. Preliminary results show that the lowest pass rate for Software Development I, 34.9%, was recorded in 2017. Since the intervention course's inception, the pass rate for Software Development I has increased each year from 2015 to 2017, by 13.75%, 25.53% and 25.81% respectively. To conclude, the preliminary results show that the intervention course is a success in improving the pass rate of Software Development I.
Keywords: academic performance, electrical engineering, engineering education, intervention course, low pass rate, software development course, throughput
Procedia PDF Downloads 164
1224 Recirculated Sedimentation Method to Control Contamination for Algal Biomass Production
Authors: Ismail S. Bostanci, Ebru Akkaya
Abstract:
The production of microalgae-derived biodiesel, fertilizer or industrial chemicals from wastewater has great potential. Water from a municipal wastewater treatment plant, in particular, is an important nutrient source for biofuel production. Open pond systems are the lower-cost option for microalgae biomass production, but there are many hurdles to commercial algal biomass production at large scale. One important technical bottleneck for microalgae production in open systems is culture contamination. Algal culture contaminants can generally be described as invading organisms that can cause a pond crash. These invading organisms can be competitors, parasites, and predators. Contamination is unavoidable in open systems, and potential contaminant organisms are already inoculated if wastewater is used for algal biomass cultivation. It is therefore important to keep contaminants at an acceptable level in order to reach the true potential of algal biofuel production. Several contamination management methods are used in the algae industry, ranging from mechanical and chemical to biological approaches and changes in growth conditions. However, none of them is accepted as a fully suitable contamination control method. This experiment describes an innovative contamination control method, the 'Recirculated Sedimentation Method', for managing contamination to avoid a pond crash. The method can be used for the production of algal biofuel, fertilizer, etc., and for algal wastewater treatment. To evaluate the performance of the method on an algal culture, an experiment was conducted for 90 days in a lab-scale raceway (60 L) reactor using non-sterilized and non-filtered wastewater (secondary effluent and centrate of anaerobic digestion).
The application of the method provided the following: removal of contaminants (predators and diatoms) and other debris from the reactor without discharging the culture (with microscopic evidence); an increase in the raceway tank's suspended-solids holding capacity (770 mg L-1); an increase in the ammonium removal rate (29.83 mg L-1 d-1); a decrease in algal and microbial biofilm formation on the inner walls of the reactor; and washout of nitrifiers generated in the reactor to prevent ammonium consumption.
Keywords: contamination control, microalgae culture contamination, pond crash, predator control
Procedia PDF Downloads 207
1223 Effect of Maturation on the Characteristics and Physicochemical Properties of Banana and Its Starch
Authors: Chien-Chun Huang, P. W. Yuan
Abstract:
Banana is one of the important fruits that constitute a valuable source of energy, vitamins and minerals, and it is an important food component throughout the world. Fruit ripening and maturity standards vary from country to country depending on the expected shelf life in the market. During ripening there are changes in the appearance, texture and chemical composition of banana. The changes in the components of banana during ethylene-induced ripening affect both its nutritive value and its commercial utilization. The objectives of this study were to investigate the changes in chemical composition and physicochemical properties of banana during ethylene-induced ripening. Green bananas were harvested and ripened with ethylene gas at low temperature (15°C) through seven stages. At each stage, banana was sliced and freeze-dried for banana flour preparation. The changes in total starch, resistant starch, chemical composition, physicochemical properties, and the activities of amylase, polyphenol oxidase (PPO) and phenylalanine ammonia lyase (PAL) of banana were analyzed at each stage of ripening. The banana starch was isolated and analyzed for gelatinization properties, pasting properties and microscopic appearance at each stage of ripening. The results indicated that the highest total starch and resistant starch contents of green banana were 76.2% and 34.6%, respectively, at the harvest stage. Both total starch and resistant starch contents declined significantly, to 25.3% and 8.8%, respectively, by the seventh stage. The soluble sugars content of banana increased from 1.21% at the harvest stage to 37.72% at the seventh stage during ethylene-induced ripening. The swelling power of banana flour decreased as ripening progressed, while solubility increased. These results correlate strongly with the decrease in starch content of banana flour during ethylene-induced ripening. Both the water-insoluble and alcohol-insoluble solids of banana flour decreased as ripening progressed.
Both PPO and PAL activities increased, while the total free phenolics content decreased, as ripening progressed. As the ripening stage advanced, the gelatinization enthalpy of banana starch decreased significantly, from 15.31 J/g at the harvest stage to 10.55 J/g at the seventh stage. In the pasting properties of banana starch, the peak viscosity and setback increased with the progress of ripening. The highest final viscosity of the banana starch slurry, 5701 RVU, was found at the seventh stage. Scanning electron micrographs showed that banana starch granules were round and elongated, ranging from 10 to 50 μm at the harvest stage. As the banana approached ripeness, parallel striations were observed on the surface of the starch granules, which could be caused by enzyme action during ripening. These results suggest that the high resistant starch content of green banana could support its application in healthy foods. The changes in chemical composition and physicochemical properties of banana could be caused by enzymatic hydrolysis during the ethylene-induced ripening treatment.
Keywords: maturation of banana, appearance, texture, soluble sugars, resistant starch, enzyme activities, physicochemical properties of banana starch
Procedia PDF Downloads 318
1222 Development of the Food Market of the Republic of Kazakhstan in the Field of Milk Processing
Authors: Gulmira Zhakupova, Tamara Tultabayeva, Aknur Muldasheva, Assem Sagandyk
Abstract:
The development of technologies for products with increased biological value based on natural food raw materials is an important task in the food market policy of the Republic of Kazakhstan. For Kazakhstan, livestock farming, in particular sheep farming, is an ancient and well-developed industry and way of life. The history of the Kazakh people is closely connected with this type of agricultural production, with established traditions of using dairy products made from sheep's milk. The development of new technologies for sheep's milk therefore remains relevant. In addition, sheep milk products are one of the most promising areas for the development of food technology for therapeutic and prophylactic purposes, as a source of protein, immunoglobulins, minerals, vitamins, and other biologically active compounds. This article presents the results of research on milk processing technology. The objective of the study is to investigate the possibilities of processing sheep milk and its role in human nutrition, and to report results of research to improve the technology of sheep milk products. The studies were carried out on the basis of sanitary and hygienic requirements for dairy products, in accordance with the following test methods. For microbiological analysis, we used the horizontal method for the detection, enumeration, and serotyping of Salmonella in a given mass or volume of product. Nutritional value is the complex of properties of food products that meet human physiological needs for energy and basic nutrients. The protein mass fraction was determined by the Kjeldahl method. This method is based on the mineralization of a milk sample with concentrated sulfuric acid in the presence of an oxidizing agent, an inert salt (potassium sulfate), and a catalyst (copper sulfate). In this process, the amino groups of the protein are converted into ammonium sulfate dissolved in sulfuric acid.
The vitamin composition was determined by HPLC. To determine the content of mineral substances in the studied samples, atomic absorption spectrophotometry was used. The study identified the technological parameters of sheep milk products and determined the prospects for further research on them. Microbiological analysis was used to establish the safety of the studied products; no deviations from the norm were identified, indicating that the products studied are safe. In terms of nutritional value, the resulting products are high in protein, and favourable amino acid data were also obtained. The results will be used in the food industry and will serve as recommendations for manufacturers.
Keywords: dairy, milk processing, nutrition, colostrum
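The Kjeldahl determination described above follows a standard calculation: the back-titration result gives the nitrogen mass fraction, which is multiplied by a nitrogen-to-protein conversion factor (6.38 is the conventional factor for milk). A minimal sketch, with hypothetical titration values, not the study's data:

```python
def kjeldahl_protein(titrant_ml, acid_normality, blank_ml, sample_g, factor=6.38):
    # Nitrogen % = (mL titrant - mL blank) * normality * 1.4007 / sample mass (g);
    # protein % = nitrogen % * conversion factor (6.38 conventional for milk)
    nitrogen_pct = (titrant_ml - blank_ml) * acid_normality * 1.4007 / sample_g
    return nitrogen_pct * factor

# hypothetical titration of a 1 g milk sample with 0.1 N acid
protein = kjeldahl_protein(titrant_ml=10.0, acid_normality=0.1,
                           blank_ml=0.2, sample_g=1.0)
```

The constant 1.4007 converts milliequivalents of nitrogen to a percentage of the sample mass; for non-dairy matrices a different conversion factor (e.g. 6.25) would be used.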
Procedia PDF Downloads 57
1221 An Automatic Large Classroom Attendance Conceptual Model Using Face Counting
Authors: Sirajdin Olagoke Adeshina, Haidi Ibrahim, Akeem Salawu
Abstract:
Large lecture theatres cannot be covered by a single camera because of their size, shape, and seating arrangements, and instead require a multicamera setup, although an ordinary classroom can be captured with a single camera. Therefore, the design and implementation of a multicamera setup for a large lecture hall were considered. Researchers have emphasized the impact of class attendance on the academic performance of students. However, the traditional method of taking attendance is inadequate, especially for large lecture theatres, because of the student population, the time required, and its sophistication, exhaustiveness, and susceptibility to manipulation. An automated large-classroom attendance system is therefore imperative. The common approach in such systems is face detection and recognition, where known student faces are captured and stored for recognition purposes. This approach requires constant face database updates due to continual changes in facial features. Alternatively, face counting can be performed by cropping the localized faces in the video or image into a folder and then counting them. This research aims to develop a face localization-based approach to detect student faces in classroom images captured using a multicamera setup. A selected Haar-like feature cascade face detector, trained with an asymmetric goal to minimize the False Rejection Rate (FRR) relative to the False Acceptance Rate (FAR), was deployed on a Raspberry Pi 4B. A relationship between the two factors (FRR and FAR) was established using a constant (λ) as a trade-off between them for automatic adjustment during training. An evaluation of the proposed approach and the conventional AdaBoost on classroom datasets shows an improvement of 8% in TPR (the result of a low FRR) and a 7% reduction in FRR. The average learning speed of the proposed approach was also improved, with an execution time of 1.19 s per image compared to 2.38 s for the improved AdaBoost.
Consequently, the proposed approach achieved 97% TPR with an overhead constraint time of 22.9 s, compared to 46.7 s for the improved AdaBoost, when evaluated on images obtained from a large lecture hall (DK5) at USM.
Keywords: automatic attendance, face detection, Haar-like cascade, manual attendance
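The FRR/FAR trade-off weighted by λ can be illustrated with a toy threshold search. This is a minimal sketch, not the authors' training procedure: `faces` and `nonfaces` are hypothetical detector confidence scores, and `lam` plays the role of λ, weighting false acceptances against false rejections:

```python
def frr_far(threshold, face_scores, nonface_scores):
    # FRR: fraction of true faces rejected (scored below threshold)
    # FAR: fraction of non-faces accepted (scored at or above threshold)
    frr = sum(s < threshold for s in face_scores) / len(face_scores)
    far = sum(s >= threshold for s in nonface_scores) / len(nonface_scores)
    return frr, far

def best_threshold(face_scores, nonface_scores, lam, candidates):
    # Pick the candidate threshold minimising the asymmetric cost FRR + lam * FAR
    def cost(t):
        frr, far = frr_far(t, face_scores, nonface_scores)
        return frr + lam * far
    return min(candidates, key=cost)

# hypothetical detector confidence scores for face and non-face windows
faces = [0.9, 0.8, 0.7, 0.6, 0.4]
nonfaces = [0.5, 0.3, 0.2, 0.1]
t = best_threshold(faces, nonfaces, lam=0.1, candidates=[i / 20 for i in range(21)])
```

With a small `lam`, the search tolerates some false acceptances in order to keep the rejection rate of true faces near zero, which mirrors the asymmetric goal described in the abstract.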
Procedia PDF Downloads 72
1220 Simulation of Elastic Bodies through Discrete Element Method, Coupled with a Nested Overlapping Grid Fluid Flow Solver
Authors: Paolo Sassi, Jorge Freiria, Gabriel Usera
Abstract:
In this work, a finite volume fluid flow solver is coupled with a discrete element method module for simulating the dynamics of free and elastic bodies interacting with the fluid and with one another. The open-source fluid flow solver, caffa3d.MBRi, can work with nested overlapping grids in order to easily refine the grid in the region where the bodies are moving. To do so, it is necessary to implement a recognition function able to identify the specific mesh block in which each body is moving. The set of overlapping finer grids can be displaced along with the set of bodies being simulated. The interaction between the bodies and the fluid is computed through a two-way coupling. The velocity field of the fluid is first interpolated to determine the drag force on each object. After solving the objects' displacements, subject to the elastic bonding among them, the force is applied back onto the fluid through Gaussian smoothing over the cells near the position of each object. The fishnet is represented as lumped masses connected by elastic lines. The internal forces are derived from the elasticity of these lines, and the external forces are due to drag, gravity, buoyancy and the load acting on each element of the system. When solving the system of ordinary differential equations that represents the motion of the elastic and flexible bodies, the fourth-order Runge-Kutta solver was found to be the best tool in terms of performance, but it requires a finer time discretization than the fluid solver for the system to converge, which demands greater computing power. The coupled solver is demonstrated by simulating the interaction between the fluid, an elastic fishnet and a set of free bodies captured by the net as they are dragged by the fluid. The deformation of the net, as well as the wake produced in the fluid stream, are well captured by the method, without requiring the fluid solver mesh to adapt to the evolving geometry.
Application of the same strategy to the simulation of elastic structures subject to the action of wind is also possible with the method presented, and one such application is currently under development.
Keywords: computational fluid dynamics, discrete element method, fishnets, nested overlapping grids
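The lumped-mass integration described above can be sketched with a classic fourth-order Runge-Kutta step. This is an illustrative single-mass version under our own simplifying assumptions, not the caffa3d.MBRi implementation: one unit mass on an elastic line of stiffness `k`, with a hypothetical linear drag toward a fluid velocity `u_fluid`:

```python
def rk4_step(f, state, t, dt):
    # One fourth-order Runge-Kutta step for state' = f(t, state)
    k1 = f(t, state)
    k2 = f(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k1)])
    k3 = f(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k2)])
    k4 = f(t + dt, [s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def deriv(t, state, k=1.0, drag=0.0, u_fluid=0.0):
    # Unit lumped mass: elastic restoring force -k*x plus drag toward fluid velocity
    x, v = state
    return [v, -k * x + drag * (u_fluid - v)]

# integrate one undamped mass from x = 1, v = 0 for one time unit
state, dt = [1.0, 0.0], 0.01
for i in range(100):
    state = rk4_step(deriv, state, i * dt, dt)
```

With drag switched off the trajectory should track the analytic solution x(t) = cos(t), which is a convenient check on the integrator before adding the drag, buoyancy and load terms of the full net model.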
Procedia PDF Downloads 416
1219 The Evaluation of Complete Blood Cell Count-Based Inflammatory Markers in Pediatric Obesity and Metabolic Syndrome
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Obesity is defined as a severe chronic disease characterized by a low-grade inflammatory state. Inflammatory markers have therefore gained utmost importance in the evaluation of obesity and metabolic syndrome (MetS), a condition characterized by central obesity, elevated blood pressure, increased fasting blood glucose, and elevated triglycerides or reduced high-density lipoprotein cholesterol (HDL-C) values. Several inflammatory markers based upon the complete blood cell count (CBC) are available. In this study, we asked which inflammatory marker best captures the differences between the various obesity groups. 514 pediatric individuals were recruited: 132 children with MetS, 155 morbid obese (MO), 90 obese (OB), 38 overweight (OW) and 99 children with normal BMI (N-BMI). Obesity groups were constituted using the age- and sex-dependent body mass index (BMI) percentiles tabulated by the World Health Organization. MetS components were determined in order to identify children with MetS. CBC was determined using an automated hematology analyzer, and HDL-C analysis was performed. Using CBC parameters and HDL-C values, ratio markers of inflammation were calculated: the neutrophil-to-lymphocyte ratio (NLR), derived neutrophil-to-lymphocyte ratio (dNLR), platelet-to-lymphocyte ratio (PLR), lymphocyte-to-monocyte ratio (LMR), and monocyte-to-HDL-C ratio (MHR). Statistical analyses were performed, with p < 0.05 considered significant. There was no statistically significant difference among the groups in terms of platelet count, neutrophil count, lymphocyte count, monocyte count, or NLR. PLR differed significantly between OW and N-BMI as well as MetS. The monocyte-to-HDL-C ratio exhibited statistically significant differences between MetS and the N-BMI, OB, and MO groups. HDL-C values differed between MetS and the N-BMI, OW, OB, and MO groups.
MHR exhibited the best performance among the CBC-based inflammatory markers. On the other hand, when MHR was compared to HDL-C alone, HDL-C provided more valuable information; this parameter therefore retains its value from the diagnostic point of view. Our results suggest that MHR can serve as an inflammatory marker in the evaluation of pediatric MetS, but its predictive value was not superior to that of HDL-C in the evaluation of obesity.
Keywords: children, complete blood cell count, high density lipoprotein cholesterol, metabolic syndrome, obesity
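The CBC-based ratios named above are simple quotients of absolute cell counts (and of HDL-C for MHR). A minimal sketch with hypothetical values, assuming counts in 10^9 cells/L and HDL-C in mg/dL; dNLR is taken here, per its usual definition, as neutrophils divided by total white cells minus neutrophils:

```python
def inflammatory_ratios(neutrophils, lymphocytes, monocytes, platelets, wbc, hdl_c):
    # CBC-based inflammatory ratio markers, as defined in the abstract
    return {
        "NLR": neutrophils / lymphocytes,           # neutrophil-to-lymphocyte
        "dNLR": neutrophils / (wbc - neutrophils),  # derived NLR
        "PLR": platelets / lymphocytes,             # platelet-to-lymphocyte
        "LMR": lymphocytes / monocytes,             # lymphocyte-to-monocyte
        "MHR": monocytes / hdl_c,                   # monocyte-to-HDL-C
    }

# hypothetical pediatric CBC values, not taken from the study
ratios = inflammatory_ratios(neutrophils=4.0, lymphocytes=2.0, monocytes=0.5,
                             platelets=300.0, wbc=7.0, hdl_c=50.0)
```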
Procedia PDF Downloads 130
1218 Study and Simulation of a Severe Dust Storm over West and South West of Iran
Authors: Saeed Farhadypour, Majid Azadi, Habibolla Sayyari, Mahmood Mosavi, Shahram Irani, Aliakbar Bidokhti, Omid Alizadeh Choobari, Ziba Hamidi
Abstract:
In recent decades, the frequency of dust events has increased significantly in the west and southwest of Iran. First, dust events during the period 1990-2013 are surveyed using historical dust data collected at 6 weather stations scattered over the west and southwest of Iran. After statistical analysis of the observational data, one of the most severe dust storm events in the region, which occurred from 3rd to 6th July 2009, is selected and analyzed. The WRF-Chem model is used to simulate the amount of PM10 and its transport to the affected areas. The initial and lateral boundary conditions for the model were obtained from GFS data with 0.5°×0.5° spatial resolution. In the simulation, two aerosol schemes (GOCART and MADE/SORGAM) with 3 options (chem_opt=106, 300 and 303) were evaluated. Statistical analysis of the historical data showed that the southwest of Iran has a high frequency of dust events; Bushehr station has the highest frequency among the stations and Urmia station the lowest. In the period 1990 to 2013, the years 2009 and 1998, with 3221 and 100 events respectively, had the highest and lowest numbers of dust events; by monthly variation, June and July had the highest frequency of dust events and December the lowest. Model results showed that the MADE/SORGAM scheme predicted PM10 values and trends better than the other schemes and showed the best agreement with the observations. Finally, the PM10 distributions and surface wind maps obtained from the numerical modeling showed the formation of dust plumes in Iraq and Syria and their transport to the west and southwest of Iran. In addition, comparing the MODIS satellite image acquired on 4th July 2009 with the model output at the same time showed the good ability of WRF-Chem to simulate the spatial distribution of dust.
Keywords: dust storm, MADE/SORGAM scheme, PM10, WRF-Chem
Procedia PDF Downloads 272
1217 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance
Authors: Rajinder Singh, Ram Valluru
Abstract:
The Chain Ladder (CL) method, Expected Loss Ratio (ELR) method and Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the various alternative approaches revolve around: stability in the recent loss development pattern; sufficiency and reliability of loss development data; and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method produces volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach: parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period).
The methodology was tested on an actual MI claim development dataset in which various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during the development period, which spanned many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produces more stable loss forecasts for reserving purposes than the traditional CL and BF methods.
Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility
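One way to sketch the parametric idea (an illustrative assumption of ours, not the authors' exact model): treat the cumulative developed fraction at age t as a logistic curve p(t) = 1 / (1 + exp(-(a + b t))), fit a and b by ordinary least squares on the logit-transformed fractions, and divide reported-to-date losses by p(t) to project the ultimate loss for a cohort:

```python
import math

def fit_logistic_development(ages, cum_fractions):
    # Linearise p(t) = 1 / (1 + exp(-(a + b*t))) via logit(p) = a + b*t,
    # then fit a and b by ordinary least squares
    ys = [math.log(p / (1.0 - p)) for p in cum_fractions]
    n = len(ages)
    mx, my = sum(ages) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(ages, ys))
         / sum((x - mx) ** 2 for x in ages))
    return my - b * mx, b

def ultimate_loss(reported_to_date, age, a, b):
    # Scale reported losses up by the expected developed fraction at this age
    p = 1.0 / (1.0 + math.exp(-(a + b * age)))
    return reported_to_date / p

# hypothetical cohort: developed fractions observed at ages 1..6 (in periods),
# generated here from a known logistic curve purely for illustration
ages = [1, 2, 3, 4, 5, 6]
fractions = [1.0 / (1.0 + math.exp(-(-2.0 + 0.5 * t))) for t in ages]
a, b = fit_logistic_development(ages, fractions)
```

In practice the curve would be fitted per homogeneous group, with the exogenous economic covariates entering through the regression rather than through this noiseless toy fit.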
Procedia PDF Downloads 131
1216 Bioremediation of Phenol in Wastewater Using Polymer-Supported Bacteria
Authors: Areej K. Al-Jwaid, Dmitiry Berllio, Andrew Cundy, Irina Savina, Jonathan L. Caplin
Abstract:
Phenol is a toxic compound that is widely distributed in the environment, including the atmosphere, water and soil, due to the release of effluents from the petrochemical and pharmaceutical industries, coking plants and oil refineries. Moreover, a range of everyday products that use phenol as a raw material may find their way into the environment without prior treatment. The toxicity of phenol affects both human and environmental health, and various physico-chemical methods have been used to remediate phenol contamination. While these techniques are effective, their complexity and high cost have led to a search for alternative strategies to reduce and eliminate high concentrations of phenolic compounds in the environment. Biological treatments are preferable because they are environmentally friendly and cheaper than physico-chemical approaches. Some microorganisms, such as Pseudomonas sp., Rhodococcus sp., Acinetobacter sp. and Bacillus sp., have shown a high ability to degrade phenolic compounds, using them as a sole source of energy. Immobilisation processes utilising various materials have been used to protect and enhance the viability of cells and to provide structural support for the bacterial cells. The aim of this study is to develop a new approach to the bioremediation of phenol, based on an immobilisation strategy, that can be used in wastewater. In this study, two bacterial species known to be phenol-degrading bacteria (Pseudomonas mendocina and Rhodococcus koreensis) were purchased from the National Collection of Industrial, Food and Marine Bacteria (NCIMB). The two species, and a mixture of them, were immobilised to produce macroporous crosslinked-cell cryogel samples using four types of cross-linker polymer solution in a cryogelation process. The samples were used in batch culture to degrade phenol at an initial concentration of 50 mg/L, at pH 7.5±0.3 and a temperature of 30°C. The four types of polymer solution - i. glutaraldehyde (GA), ii.
Polyvinyl alcohol with glutaraldehyde (PVA+GA), iii. Polyvinyl alcohol-aldehyde (PVA-al) and iv. Polyethyleneimine-aldehyde (PEI-al) - were used at different concentrations, ranging from 0.5 to 1.5%, to crosslink the cells. The results of SEM and rheology analysis indicated that the cell-cryogel samples crosslinked with the four cross-linker polymers formed monolithic macroporous cryogels. The samples were evaluated for their ability to degrade phenol. Macroporous cell-cryogels crosslinked with GA and PVA+GA showed an ability to degrade phenol for only one week, while the other samples, crosslinked with a combination of PVA-al + PEI-al at two different concentrations, showed higher stability and could be reused to degrade phenol at a concentration of 50 mg/L for five weeks. These initial results indicate that crosslinked-cell cryogels are a promising tool for bioremediation strategies, especially for removing high concentrations of phenol from wastewater.
Keywords: bioremediation, crosslinked cells, immobilisation, phenol degradation
Procedia PDF Downloads 234
1215 Lung Function, Urinary Heavy Metals and Its Other Influencing Factors among Community in Klang Valley
Authors: Ammar Amsyar Abdul Haddi, Mohd Hasni Jaafar
Abstract:
Heavy metals are elements naturally present in the environment that can cause adverse health effects. However, little literature was found on their effects on lung function, where impairment may lead to various lung diseases. The objective of the study is to explore lung function impairment, urinary heavy metal levels, and their associated factors among the community in the Klang Valley, Malaysia. Sampling was done in Kuala Lumpur suburban public and housing areas during community events from March 2019 till October 2019. Respondents who gave consent were given a questionnaire to answer and then underwent a lung function test. Urine samples were obtained at the end of the session and sent for inductively coupled plasma mass spectrometry (ICP-MS) analysis of heavy metal cadmium (Cd) and lead (Pb) concentrations. A total of 200 samples were analysed; 52% of respondents were male, with ages ranging from 18 to 74 years and a mean age of 38.44. Urinary samples showed that 12% of respondents (n=22) had a Cd level above average, and 1.5% (n=3) had urinary Pb at an above-normal level. Bivariate analysis showed a positive correlation between urinary Cd and urinary Pb (r=0.309; p<0.001). Furthermore, there were negative correlations between urinary Cd level and forced vital capacity (FVC) (r=-0.202, p=0.004), forced expiratory volume in 1 second (FEV1) (r=-0.225, p=0.001), and forced expiratory flow between 25% and 75% of FVC (FEF25%-75%) (r=-0.187, p=0.008). However, urinary Pb did not show any association with FVC, FEV1, FEV1/FVC, or FEF25%-75%. Multiple linear regression analysis showed that urinary Cd remained significant and negatively affected the percentage of predicted FVC (p=0.025) and FEV1 (p=0.004).
On top of that, other factors such as education level (p=0.013) and duration of smoking (p=0.003) may influence both urinary Cd and lung function performance, suggesting Cd as a potential mediating factor between smoking and lung function impairment. However, no interaction was detected between the heavy metals or other influencing factors in this study. In short, there is a negative linear relationship between urinary Cd and lung function, and urinary Cd likely affects lung function in a restrictive pattern. Since smoking is also an influencing factor for urinary Cd and lung function impairment, it is highly suggested that smokers be screened for lung function and urinary Cd levels in the future for early disease prevention.
Keywords: lung function, heavy metals, community
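The bivariate correlations reported in this abstract (e.g., r = -0.202 between urinary Cd and FVC) are Pearson product-moment coefficients. A minimal sketch of how such a coefficient and its test statistic are computed follows; the Cd/FVC pairs below are invented for illustration, not the study data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t statistic for testing H0: rho = 0 with n paired observations (df = n - 2)."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# Made-up illustrative pairs showing a negative Cd-vs-FVC trend:
cd = [0.2, 0.5, 0.8, 1.1, 1.6, 2.0]
fvc = [98, 95, 93, 90, 85, 82]
r = pearson_r(cd, fvc)

# Consistency check against the abstract's own numbers: r = -0.202 with
# n = 200 yields |t| of roughly 2.9 on 198 df, in line with p = 0.004.
t_cd_fvc = t_statistic(-0.202, 200)
```

This reproduces the internal consistency of the reported r and p values without access to the raw data.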
Procedia PDF Downloads 156
1214 Acceptability of ‘Fish Surimi Peptide’ in Under Five Children Suffering from Moderate Acute Malnutrition in Bangladesh
Authors: M. Iqbal Hossain, Azharul Islam Khan, S. M. Rafiqul Islam, Tahmeed Ahmed
Abstract:
Objective: Moderate acute malnutrition (MAM) is a major cause of morbidity and mortality in under-5 children of low-income countries. Approximately 14.6% of all under-5 mortality worldwide is attributed to MAM, with a >3 times increased risk of death compared to well-nourished peers. The prevalence of MAM among under-5 children in Bangladesh is ~12% (~1.7 million). Providing a diet containing adequate nutrients is the mainstay of treatment of children with MAM. It is now possible to process fish into fish peptides with a longer shelf-life without refrigeration, known as ‘Fish Surimi peptide’, and this could be an attractive alternative for supplying fish protein in the diet of children in low-income countries like Bangladesh. We conducted this study to assess the acceptability of Fish Surimi peptide given with various foods/meals to 2-5 years old children with MAM. Design/methods: Fish Surimi peptide is broken down from white fish meat using a plant-derived enzyme; its only ingredient is fish meat, consisting of 20 different kinds of amino acids, including the nine essential amino acids. We completed the study in a convenience sample of 34 children in the study ward of Dhaka Hospital of icddr,b in Bangladesh during November 2014 through February 2015. For each child the study lasted two consecutive days, with direct observation of food intake at two lunches and two suppers. In a randomized, blinded, cross-over design, an individual child received a Fish Surimi peptide (5g at lunch and 5g at supper) mixed meal [e.g., 30g rice and 30g dahl (thick lentil soup), or 60g of a vegetable-lentil-rice mixed local dish known as khichuri] on one day, and the same meal without any Fish Surimi peptide on the other day. We observed the completeness and eagerness of eating and any possible side effects (e.g., allergy, vomiting, diarrhea) over these two days. Results: The mean±SD age of the enrolled children was 38.4±9.4 months, weight 11.22±1.41 kg, height 91.0±6.3 cm, and WHZ -2.13±0.76.
Their mean±SD total feeding time (minutes) was 25.4±13.6 vs. 20.6±11.1 (p=0.130) for lunch and 22.3±9.7 vs. 19.7±11.2 (p=0.297) for supper, and the total amount (g) of food eaten at lunch and supper was similar, 116.1±7.0 vs. 117.7±8.0 (p=3.01), in group A (Fish Surimi) and group B respectively. Mothers' hedonic-scale scores for the taste of food given to children at lunch or supper were 3.9±0.2 vs. 4.0±0.2 (p=0.317), and scores for overall acceptance (including texture, smell, and appearance) were 3.9±0.2 vs. 4.0±0.2 (p=0.317), for groups A and B respectively. No adverse event was observed in either food group during the study period. Conclusions: Fish Surimi peptide may be a cost-effective supplementary food, which should be tested in an appropriately designed randomized community-level intervention trial in both wasted and stunted children.
Keywords: protein-energy malnutrition, moderate acute malnutrition, weight-for-height z-score, mid upper arm circumference, acceptability, fish surimi peptide, under-5 children
Procedia PDF Downloads 413
1213 Fluorescence-Based Biosensor for Dopamine Detection Using Quantum Dots
Authors: Sylwia Krawiec, Joanna Cabaj, Karol Malecha
Abstract:
Nowadays, progress in the field of analytical methods is of great interest for reliable biological research and medical diagnostics. Classical techniques of chemical analysis, despite many advantages, do not allow immediate results or automation of measurements. Chemical sensors have displaced the conventional analytical methods - sensors combine precision, sensitivity, fast response, and the possibility of continuous monitoring. A biosensor is a chemical sensor that, in addition to a transducer, also possesses a biologically active material, which is the basis for the detection of specific chemicals in the sample. Each biosensor device mainly consists of two elements: a sensitive element, where receptor-analyte recognition occurs, and a transducer element, which receives the signal and converts it into a measurable one. Through these two elements, biosensors can be divided into two categories: by the recognition element (e.g., immunosensor) and by the transducer (e.g., optical sensor). The working of an optical sensor is based on measuring quantitative changes in parameters characterizing light radiation; the most often analyzed parameters include amplitude (intensity), frequency, and polarization. In a direct method, changes in the optical properties of a compound that reacts with the biological material coated on the sensor are analyzed; in an indirect method, indicators are used whose optical properties change due to the transformation of the tested species. The most commonly used dyes in this method are small molecules with an aromatic ring, like rhodamine; fluorescent proteins, for example green fluorescent protein (GFP); or nanoparticles such as quantum dots (QDs). Quantum dots have, in comparison with organic dyes, much better photoluminescent properties, better bioavailability, and chemical inertness. These are semiconductor nanocrystals 2-10 nm in size.
This very limited number of atoms and the ‘nano’ size give QDs their highly fluorescent properties. Rapid and sensitive detection of dopamine is extremely important in modern medicine. Dopamine is a very important neurotransmitter, which mainly occurs in the brain and central nervous system of mammals. It is responsible for the transmission of information through the nervous system and plays an important role in processes of learning and memory. Detection of dopamine is significant for diseases associated with the central nervous system, such as Parkinson's disease or schizophrenia. The developed optical biosensor for detection of dopamine uses graphene quantum dots (GQDs). In such a sensor, dopamine molecules coat the GQD surface; as a result, quenching of fluorescence occurs due to Fluorescence Resonance Energy Transfer (FRET). Changes in fluorescence correspond to specific concentrations of the neurotransmitter in the tested sample, so it is possible to accurately determine the concentration of dopamine in the sample.
Keywords: biosensor, dopamine, fluorescence, quantum dots
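The quantitation step described in this abstract - mapping fluorescence quenching back to an analyte concentration - is commonly modeled with the Stern-Volmer relation. The sketch below illustrates that generic calibration, not the authors' actual one; the intensity `f0`, constant `ksv`, and concentration values are invented for illustration.

```python
def stern_volmer_intensity(f0, ksv, q):
    """Quenched fluorescence from the Stern-Volmer relation:
    F0 / F = 1 + Ksv * [Q]  =>  F = F0 / (1 + Ksv * [Q])."""
    return f0 / (1.0 + ksv * q)

def concentration_from_quenching(f0, f, ksv):
    """Invert the relation to recover the quencher (here, dopamine) concentration."""
    return (f0 / f - 1.0) / ksv

f0 = 1000.0   # unquenched GQD emission intensity (arbitrary units, assumed)
ksv = 2.0e4   # illustrative Stern-Volmer constant, M^-1 (assumed)
c = 5.0e-6    # 5 uM dopamine, for the round-trip demonstration

f = stern_volmer_intensity(f0, ksv, c)          # quenched intensity
c_back = concentration_from_quenching(f0, f, ksv)  # recovered concentration
```

In practice `ksv` would be fitted from a calibration series of known dopamine standards; the concentration of an unknown sample is then read off from its measured quenching ratio.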
Procedia PDF Downloads 365
1212 The Effectiveness of Prefabricated Vertical Drains for Accelerating Consolidation of Tunis Soft Soil
Authors: Marwa Ben Khalifa, Zeineb Ben Salem, Wissem Frikha
Abstract:
The purpose of the present work is to study the consolidation behavior of highly compressible Tunis soft soil “TSS” by means of prefabricated vertical drains (PVDs) associated with preloading, based on laboratory and field investigations. On the one hand, the field performance of PVDs in the Tunis soft soil layer was analysed based on the case study of the construction of the embankments of the “Radès la Goulette” bridge project. Geosynthetic PVDs were installed in a triangular grid pattern to a depth of 10 m, associated with a step-by-step surcharge. Monitoring of the soil settlement during the preloading stage of the Radès la Goulette bridge project was provided by instrumentation composed of various types of settlement gauges (tassometers) installed in the soil, and the distribution of water pressure was monitored through piezocone penetration. On the other hand, reduced-scale laboratory tests were performed on TSS, also subjected to preloading and improved with PVDs of the Mebradrain 88 (Mb88) type. A specific test apparatus was designed and manufactured to study the consolidation. Two series of consolidation tests were performed on TSS specimens: the first series included consolidation tests for soil improved by one central drain; in the second series, a triangular mesh of three geodrains was used. The evolution of the degree of consolidation and the measured settlements versus time derived from laboratory tests and field data are presented and discussed. The obtained results show that PVDs considerably accelerated the consolidation of Tunis soft soil by shortening the drainage path. The model with a mesh of three drains gave results closer to the field data; a longer consolidation time was observed for the cell improved by a single central drain. A comparison with theoretical analyses, basically those of Barron (1948) and Carrillo (1942), is presented.
It is found that these theories overestimate the degree of consolidation in the presence of PVDs.
Keywords: tunis soft soil, prefabricated vertical drains, acceleration of consolidation, dissipation of excess pore water pressures, radès bridge project, barron and carrillo's theories
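The theoretical benchmarks named in this abstract have closed forms. A sketch of Barron's (1948) equal-strain solution for radial consolidation around an ideal drain, and Carrillo's (1942) rule for combining vertical and radial drainage, is given below; smear and well resistance are ignored, as in the ideal-drain case.

```python
import math

def barron_ur(tr, n):
    """Barron (1948) average degree of radial consolidation U_r for an ideal
    vertical drain: U_r = 1 - exp(-8 T_r / F(n)), where T_r is the radial time
    factor and n = d_e / d_w is the ratio of influence-zone to drain diameter."""
    fn = (n ** 2 / (n ** 2 - 1.0)) * math.log(n) - (3.0 * n ** 2 - 1.0) / (4.0 * n ** 2)
    return 1.0 - math.exp(-8.0 * tr / fn)

def carrillo_combined(uv, ur):
    """Carrillo (1942): combined degree of consolidation from vertical (U_v)
    and radial (U_r) components: U = 1 - (1 - U_v)(1 - U_r)."""
    return 1.0 - (1.0 - uv) * (1.0 - ur)

# Illustrative spacing ratio (assumed, not from the paper):
u_r = barron_ur(tr=0.2, n=20)          # radial component alone
u_total = carrillo_combined(0.3, u_r)  # with an assumed 30% vertical component
```

These are the expressions the laboratory and field consolidation curves would be compared against; the abstract's finding is that they predict faster consolidation than observed.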
Procedia PDF Downloads 127
1211 Development and Characterization of Expandable TPEs Compounds for Footwear Applications
Authors: Ana Elisa Ribeiro Costa, Sónia Daniela Ferreira Miranda, João Pedro De Carvalho Pereira, João Carlos Simões Bernardo
Abstract:
Thermoplastic elastomers (TPEs) have been widely used in the footwear industry over the years. Recently, this industry has been requesting materials that combine light weight and high abrasion resistance. Although there are blowing agents on the market to reduce weight, when these are incorporated into molten polymers during extrusion or injection molding, specific processing conditions (e.g., temperature and hydrodynamic stresses) are necessary to obtain good properties and an acceptable surface appearance on the final products. It is therefore a great advantage for the compounder industry to acquire compounds that already include the blowing agents; in this way, they can be handled and processed under the same conditions as a conventional raw material. In this work, expandable TPE compounds, namely a TPU and a SEBS incorporating blowing agents, were developed in a co-rotating modular twin-screw parallel extruder. Different blowing agents, such as thermo-expandable microspheres and an azodicarbonamide, were selected, and different screw configurations and temperature profiles were evaluated, since these parameters have a particular influence on inhibiting expansion of the blowing agents. Furthermore, the percentages of incorporation were varied in order to investigate their influence on the final product properties. After extrusion of these compounds, expansion was tested in the injection process. The mechanical and physical properties were characterized by different analytical methods, such as tensile, flexural, and abrasion tests, hardness determination, and density measurement. Scanning electron microscopy (SEM) was also performed. It was observed that it is possible to incorporate the blowing agents into the TPEs without their expanding during the extrusion process; only on reprocessing (injection molding) did the expansion of the agents occur.
These results are corroborated by SEM micrographs, which show a good distribution of the blowing agents in the polymeric matrices. The other experimental results showed good mechanical performance and a density decrease (30% for SEBS and 35% for TPU). This study suggests that it is possible to develop optimized compounds for footwear applications (e.g., shoe soles) that will expand only during the injection process.
Keywords: blowing agents, expandable thermoplastic elastomeric compounds, low density, footwear applications
Procedia PDF Downloads 208
1210 Overcoming Challenges of Teaching English as a Foreign Language in Technical Classrooms: A Case Study at TVTC College of Technology
Authors: Sreekanth Reddy Ballarapu
Abstract:
The perception of the whole process of teaching and learning is undergoing a drastic and radical change. More and more student-centered, pragmatic, and flexible approaches are gradually replacing teacher-centered lecturing and structural-syllabus instruction. Teaching English as a foreign language is no exception in this regard. The traditional Present-Practice-Produce (P-P-P) method of teaching English is being overtaken by task-based teaching, a subsidiary branch of communicative language teaching. At this juncture, this article argues that task-based learning has an advantage over other traditional methods of teaching: all teachers of English should try to customize their texts into productive tasks, apply them, and evaluate the students as well as themselves. Task-based learning is a double-edged tool that can enhance the performance of both the teacher and the taught. The sample for this case study is a class of 35 students from Semester III of the Network branch at TVTC College of Technology, Adhum, Kingdom of Saudi Arabia. The students are high school graduates aged between 19 and 21 years. For the present study, the prescribed textbook Technical English 1 by David Bonamy was used, and a number of language tasks were chalked out during the pre-task stage, in which the learners were made to participate voluntarily and actively. An action research methodology was adopted within the dual framework of communicative language teaching and task-based learning. Different tools, such as questionnaires, feedback, and interviews, were used to collect data. This study provides information about various techniques of communicative language teaching and task-based learning and focuses primarily on the advantages of using a task-based learning approach.
This article presents in detail the objectives of the study, the planning and implementation of the action research, the challenges encountered during the execution of the plan, and the pedagogical outcome of the project. The research findings serve two purposes: first, they evaluate the effectiveness of task-based learning, and second, they strengthen the teacher's professionalism in designing and implementing the tasks. In the end, the scope for further research is presented in brief.
Keywords: action research, communicative language teaching, task based learning, perception
Procedia PDF Downloads 238
1209 Development of a PJWF Cleaning Method for Wet Electrostatic Precipitators
Authors: Hsueh-Hsing Lu, Thi-Cuc Le, Tung-Sheng Tsai, Chuen-Jinn Tsai
Abstract:
This study designed and tested a novel wet electrostatic precipitator (WEP) system featuring a Pulse-Air-Jet-Assisted Water Flow (PJWF) to shorten water cleaning time, reduce water usage, and maintain high particle removal efficiency. The PJWF injects cleaning water tangentially at the cylinder wall, rapidly enhancing the momentum of the water flow for efficient dust cake removal. Each PJWF cycle uses approximately 4.8 liters of cleaning water in 18 seconds. Comprehensive laboratory tests were conducted using a single-tube WEP prototype within a flow rate range of 3.0 to 6.0 cubic meters per minute (CMM), operating voltages between -35 and -55 kV, and a high-frequency power supply. The prototype, consisting of 72 sets of double-spike rigid discharge electrodes, demonstrated that with the PJWF, at -35 kV and 3.0 CMM, the PM2.5 collection efficiency remained as high as its initial value of 88.02±0.92% after loading with Al2O3 particles at 35.75±2.54 mg/Nm3 for 20 hours of continuous operation. In contrast, without the PJWF, the PM2.5 collection efficiency dropped drastically from 87.4% to 53.5%. Theoretical modeling closely matched the experimental results, confirming the robustness of the system's design and its scalability to larger industrial applications. Future research will focus on optimizing the PJWF system, exploring its performance with various particulate matter, and ensuring long-term operational stability and reliability under diverse environmental conditions. Recently, this WEP was combined with a preceding cooling tower (CT) and a honeycomb wet scrubber (HWS) and pilot-tested (40 CMM) for removal of SO2 and PM2.5 emissions in a sintering plant of an integrated steel making plant. Pilot-test results showed removal efficiencies for SO2 and PM2.5 emissions as high as 99.7% and 99.3%, respectively, with ultralow emitted concentrations of 0.3 ppm and 0.07 mg/m3, respectively, while white smoke is also eliminated at the same time.
These new technologies are being used in industry, and their application in different fields is expected to expand, reducing air pollutant emissions substantially for better ambient air quality.
Keywords: wet electrostatic precipitator, pulse-air-jet-assisted water flow, particle removal efficiency, air pollution control
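For single-tube precipitators like the prototype described above, the classical Deutsch-Anderson equation is the standard first-order model of collection efficiency. The abstract does not state which theoretical model was used, so the migration velocity and geometry below are purely illustrative assumptions.

```python
import math

def deutsch_anderson_efficiency(w, area, flow):
    """Deutsch-Anderson collection efficiency for an electrostatic precipitator:
    eta = 1 - exp(-w * A / Q), where w is the effective particle migration
    velocity (m/s), A the collecting surface area (m^2), and Q the gas
    volumetric flow rate (m^3/s)."""
    return 1.0 - math.exp(-w * area / flow)

# Illustrative (assumed) numbers: 3.0 CMM = 0.05 m^3/s gas flow,
# 1.5 m^2 collecting area, 0.08 m/s effective migration velocity.
q = 3.0 / 60.0
eta = deutsch_anderson_efficiency(w=0.08, area=1.5, flow=q)
```

The model captures the key scaling reported in the abstract: any dust cake that degrades the effective migration velocity w (as happens without PJWF cleaning) reduces efficiency exponentially, which is why periodic water cleaning restores performance.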
Procedia PDF Downloads 20
1208 Backwash Optimization for Drinking Water Treatment Biological Filters
Authors: Sarra K. Ikhlef, Onita Basu
Abstract:
Natural organic matter (NOM) removal efficiency in drinking water treatment biological filters can be highly influenced by backwashing conditions. Backwashing removes the accumulated biomass and particles in order to regenerate the biological filters' removal capacity and prevent excessive headloss buildup. A lab-scale system consisting of three biological filters was used in this study to examine the implications of different backwash strategies for biological filtration performance. The backwash procedures were evaluated based on their impacts on dissolved organic carbon (DOC) removal, biological filter biomass, backwash water volume usage, and particle removal. Results showed that under nutrient-limited conditions, the simultaneous use of air and water under collapse-pulsing conditions led to a DOC removal of 22%, significantly higher (p<0.05) than the 12% removal observed under water-only backwash conditions. Employing a bed expansion of 20% under nutrient-supplemented conditions, compared to a 30% reference bed expansion using the same water volume, led to similar DOC removals; on the other hand, a higher bed expansion (40%) led to significantly lower DOC removals (23%). Also, a backwash strategy that reduced backwash water volume usage by about 20% resulted in DOC removals similar to those of the reference backwash. The backwash procedures investigated in this study showed no consistent impact on biological filter biomass concentrations as measured by the phospholipid and adenosine tri-phosphate (ATP) methods, and neither of these two analyses showed a direct correlation with DOC removal. On the other hand, dissolved oxygen (DO) uptake showed a direct correlation with DOC removals. The addition of the extended terminal subfluidization wash (ETSW) demonstrated no apparent impact on DOC removals, and ETSW successfully eliminated the filter ripening sequence (FRS).
As a result, the additional water usage resulting from implementing ETSW was compensated by water savings after restart. Results from this study provide insight for researchers and water treatment utilities on how to better optimize the backwashing procedure and thereby the overall biological filtration process.
Keywords: biological filtration, backwashing, collapse pulsing, ETSW
Procedia PDF Downloads 274
1207 A Review of Lexical Retrieval Intervention in Primary Progressive Aphasia and Alzheimer's Disease: Mechanisms of Change, Cognition, and Generalisation
Authors: Ashleigh Beales, Anne Whitworth, Jade Cartwright
Abstract:
Background: While significant benefits of lexical retrieval intervention are evident within the primary progressive aphasia (PPA) and Alzheimer's disease (AD) literature, understanding of the mechanisms that underlie change or improvement is limited. Change mechanisms have been explored in the non-progressive post-stroke literature, which may offer insight into how interventions effect change in progressive language disorders. Cognitive factors may also play a role here, interacting with the aims of intervention. Exploring how such processes have been applied is likely to deepen our understanding of how interventions have, or have not, been effective, and how and why generalisation is, or is not, likely to occur. Aims: This review of the literature aimed to (1) investigate the proposed mechanisms of change which underpin lexical interventions, mapping the PPA and AD lexical retrieval literature to theoretical accounts of mechanisms that underlie change within the broader intervention literature, (2) identify whether, and which, nonlinguistic cognitive functions have been engaged in intervention with these populations and any proposed influence, and (3) explore evidence of linguistic generalisation, with particular reference to the change mechanisms employed in interventions. Main contribution: A search of Medline, PsycINFO, and CINAHL identified 36 articles that reported data for individuals with PPA or AD following lexical retrieval intervention. A review of the mechanisms of change identified 10 studies that used stimulation, 21 that utilised relearning, three that drew on reorganisation, and two that used cognitive relay. Significant treatment gains, predominantly based on linguistic performance measures, were reported for all client groups for each of the proposed mechanisms. Reorganisation and cognitive-relay change mechanisms were only targeted in PPA.
Eighteen studies incorporated nonlinguistic cognitive functions in intervention; these were limited to autobiographical memory (16 studies), episodic memory (three studies), or both (one study). Linguistic generalisation outcomes were inconsistently reported in PPA and AD studies. Conclusion: This review highlights that individuals with PPA and AD may benefit from lexical retrieval intervention, irrespective of the mechanism of change. Thorough application of a theory of intervention is required to gain a greater understanding of the change mechanisms, as well as the interplay of nonlinguistic cognitive functions.
Keywords: Alzheimer's disease, lexical retrieval, mechanisms of change, primary progressive aphasia
Procedia PDF Downloads 203
1206 Uncertainty Evaluation of Erosion Volume Measurement Using Coordinate Measuring Machine
Authors: Mohamed Dhouibi, Bogdan Stirbu, Chabotier André, Marc Pirlot
Abstract:
Internal barrel wear is a major factor affecting the performance of small caliber guns in the different phases of their life. Wear analysis is, therefore, a very important process for understanding how wear occurs, where it takes place, and how it spreads, with the aim of improving the accuracy and effectiveness of small caliber weapons. This paper discusses the measurement and analysis of combustion chamber wear for small-caliber guns using a coordinate measuring machine (CMM). Two different NATO small caliber guns are considered: 5.56x45mm and 7.62x51mm. A Zeiss Micura CMM equipped with the VAST XTR gold high-end sensor is used to measure the inner profile of the two guns every 300-shot cycle. The CMM parameters, such as (i) the measuring force, (ii) the measured points, (iii) the time of masking, and (iv) the scanning velocity, are investigated, and a statistical analysis is adopted to select the combination of CMM parameters that ensures minimum measurement error. Next, two measurement strategies are developed to capture the shape and the volume of each gun chamber, and a task-specific measurement uncertainty (TSMU) analysis is carried out for each measurement plan. Different approaches to TSMU evaluation have been proposed in the literature; this paper discusses two of them. The first is the substitution method described in ISO 15530 part 3, based on the use of calibrated workpieces with a shape and size similar to the measured part. The second is the Monte Carlo simulation method presented in ISO 15530 part 4, in which uncertainty evaluation software (UES), also known as the virtual coordinate measuring machine (VCMM), is utilized to perform a point-by-point simulation of the measurements. To conclude, a comparison between both approaches is performed.
Finally, the results of the measurements are verified against calibrated gauges of several dimensions specially designed for the two barrels. On this basis, an experimental database is developed for further analysis aiming to quantify the relationship between the wear volume and the muzzle velocity of small caliber guns.
Keywords: coordinate measuring machine, measurement uncertainty, erosion and wear volume, small caliber guns
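The ISO 15530-4 Monte Carlo approach mentioned above can be sketched in miniature: perturb the measured geometry with probing noise and propagate the scatter to the derived quantity. The idealized cylindrical-chamber geometry and noise level below are invented for illustration; a real VCMM simulates far richer error sources (probe calibration, scale errors, thermal drift, form deviations).

```python
import math
import random
import statistics

def monte_carlo_volume_uncertainty(radius, depth, sigma_point, n_sim=20000, seed=1):
    """Monte Carlo propagation of Gaussian probing noise on a cylinder's
    measured radius and depth to its volume V = pi * r^2 * h.
    Returns (mean volume, standard uncertainty of the volume)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    volumes = []
    for _ in range(n_sim):
        r = radius + rng.gauss(0.0, sigma_point)
        h = depth + rng.gauss(0.0, sigma_point)
        volumes.append(math.pi * r * r * h)
    return statistics.fmean(volumes), statistics.stdev(volumes)

# Illustrative (assumed) chamber dimensions in mm, 2 um probing noise:
mean_v, u_v = monte_carlo_volume_uncertainty(radius=2.8, depth=40.0,
                                             sigma_point=0.002)
```

The wear volume between two shot cycles would then be a difference of two such measurements, with its uncertainty obtained by combining the two standard uncertainties.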
Procedia PDF Downloads 152
1205 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth: the results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of SFS dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress, because the resolution of the SFS dynamics is insufficient. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction except for cases involving the Helmholtz I and II filters, and a remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Secondly, the exploration extends to filter anisotropy and its impact on the SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) of the LES filters ranging from 1 to 16 are evaluated.
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models, although these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. These findings emphasize the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
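Deconvolution-type SFS models recover an approximation of the unfiltered field by approximately inverting the LES filter. A 1-D sketch using a spectral Gaussian filter and van Cittert iteration (a common building block of approximate deconvolution, not the authors' exact DDM formulation) is shown below; the synthetic two-mode field is invented for the demonstration.

```python
import numpy as np

def gaussian_filter_1d(u, delta, dx):
    """Apply a spectral Gaussian filter of width delta to a periodic 1-D field,
    using the transfer function G(k) = exp(-k^2 delta^2 / 24)."""
    k = np.fft.fftfreq(u.size, d=dx) * 2.0 * np.pi
    g = np.exp(-(k * delta) ** 2 / 24.0)
    return np.real(np.fft.ifft(np.fft.fft(u) * g))

def van_cittert_deconvolve(u_bar, delta, dx, n_iter=5):
    """Approximate inverse filtering via van Cittert iteration:
    u_{m+1} = u_m + (u_bar - G * u_m), starting from u_0 = u_bar."""
    u = u_bar.copy()
    for _ in range(n_iter):
        u = u + (u_bar - gaussian_filter_1d(u, delta, dx))
    return u

# Synthetic periodic test field with two Fourier modes:
n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u_true = np.sin(x) + 0.5 * np.sin(4.0 * x)
u_bar = gaussian_filter_1d(u_true, delta=4.0 * dx, dx=dx)     # "LES" field
u_rec = van_cittert_deconvolve(u_bar, delta=4.0 * dx, dx=dx)  # deconvolved
```

Because the Gaussian filter's transfer function stays strictly between 0 and 1, the iteration converges toward the unfiltered field, which is why such filters are described as invertible in the abstract; scales removed entirely by grid truncation cannot be recovered this way, which is the role of the FGR.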
Procedia PDF Downloads 75
1204 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
In biomedical research and randomized clinical trials, the most common outcomes of interest are time-to-event, so-called survival, data. The importance of robust models in this context lies in comparing the effects of randomly assigned experimental groups in a way that carries a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for the estimation of causal effects when modeling left-truncated and right-censored survival data. Despite its wide application and popularity for estimating unknown parameters, the maximum likelihood estimation technique is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates; thus, to ease this complexity, we propose modified estimating equations. After presenting the estimation procedures, the consistency and asymptotic properties of the estimators are derived, and the finite-sample performance of the estimators under the proposed model is illustrated via simulation studies and the Stanford heart transplant real data example. To sum up, the bias of covariates is adjusted by estimating the density function of the truncation variable, which is also incorporated in the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, the expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function.
In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experiments, after adjusting for the bias raised in the model due to the truncation variable.
Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate
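The cumulative hazard functions whose ratio defines the causal effect above are, in the simplest nonparametric setting, estimated by the Nelson-Aalen estimator. The sketch below illustrates that estimator alone, ignoring the left truncation and covariate adjustments that the paper's model-based approach handles; the event times are invented.

```python
def nelson_aalen(times, events):
    """Nelson-Aalen estimator of the cumulative hazard H(t) from
    right-censored data: at each observed event, H increases by 1 / n_i,
    where n_i is the number of subjects still at risk.
    `events[i]` is 1 for an observed event, 0 for censoring.
    Returns a list of (time, H(time)) pairs in time order."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    h, curve = 0.0, []
    for i in order:
        if events[i]:
            h += 1.0 / at_risk
        curve.append((times[i], h))
        at_risk -= 1
    return curve

# Invented data: two small treatment arms; their cumulative-hazard ratio
# at a common horizon is the kind of quantity the abstract's causal effect
# generalizes (here without truncation adjustment).
active = nelson_aalen([5, 8, 12, 16], [1, 1, 0, 1])
passive = nelson_aalen([3, 6, 9, 11], [1, 1, 1, 1])
ratio = active[-1][1] / passive[-1][1]
```

A ratio below 1 would indicate lower accumulated hazard in the active arm over the observed window.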
Procedia PDF Downloads 125
1203 An Exploratory Study of Entrepreneurial Satisfaction among Older Founders
Authors: Catarina Seco Matos, Miguel Amaral
Abstract:
The developed world is facing falling birth rates and rising life expectancies; as a result, the overall demographic structure of societies is becoming markedly older. This leads to economic and political pressure towards the extension of individuals' working lives. On the other hand, evidence shows that some older workers choose to stay in the labour force as employees, whereas others choose to pursue a more entrepreneurial occupational path. Thus, entrepreneurship or self-employment may be an option for the socioeconomic participation of older individuals. Previous research on senior entrepreneurship is scarce and focuses mainly on entrepreneurship determinants and individuals' intentions. Whether entrepreneurship is perceived as a voluntary or involuntary decision, or as a positive or negative outcome, by older individuals is, to the best of our knowledge, still unexplored in the literature. In order to analyse the determinants of entrepreneurial satisfaction among older individuals, primary data were obtained from a unique questionnaire survey sent to Portuguese senior entrepreneurs who launched their companies aged 50 and over (N=181). Portugal is one of the countries in the world with the largest ageing population and has a high proportion of older individuals who remain active after their official retirement age, which makes it an extremely relevant case study on senior entrepreneurship. Findings suggest that non-pecuniary factors (rather than pecuniary ones) are the main driver for entrepreneurship at older ages. Specifically, results show that the will to remain active is the main motivation of older individuals to become entrepreneurs, in line with the activity and continuity theories. Furthermore, senior entrepreneurs tend to have had an active working life (using their professional experience as a proxy) and thus want to keep the same lifestyle at an older age (in line with continuity theory).
Finally, results show that even though older individuals’ companies may not show the best financial performance, this does not seem to affect their satisfaction with the company and with entrepreneurship in general. The present study aims at exploring, discussing and bringing new research on senior entrepreneurship to the fore, rather than assuming a purely deductive approach; hence, further confirmatory analyses with larger data sets from different countries are required.
Keywords: active ageing, entrepreneurship, older entrepreneur, Portugal, satisfaction, senior entrepreneur
Procedia PDF Downloads 237
1202 Computational Fluid Dynamics Simulation of Turbulent Convective Heat Transfer in Rectangular Mini-Channels for Rocket Cooling Applications
Authors: O. Anwar Beg, Armghan Zubair, Sireetorn Kuharat, Meisam Babaie
Abstract:
In this work, motivated by rocket channel cooling applications, we describe recent CFD simulations of turbulent convective heat transfer in mini-channels at different aspect ratios. ANSYS FLUENT software has been employed, achieving a mean average error of 5.97% relative to Forrest’s MIT cooling channel study (2014) at a Reynolds number of 50,443 and a Prandtl number of 3.01. This suggests that the simulation model created for turbulent flow was a suitable foundation for studying different channel aspect ratios. Multiple aspect ratios were considered to understand the influence of high aspect ratios and to identify the best-performing cooling channel, which was determined to be the highest-aspect-ratio channel. Hence, the approximately 28:1 aspect ratio provided the best characteristics for effective cooling. A mesh convergence study was performed to assess the optimum mesh density for accurate results; for this study, an element size of 0.05 mm was used to generate 579,120 elements for proper turbulent flow simulation. Deploying a greater bias factor would increase the mesh density towards the far edges of the channel, which would be useful if the focus of the study were on a single side of the wall only. Since a bulk temperature is involved in the calculations, it is essential to use a suitable bias factor to ensure the reliability of the results; hence, in this study we opted for a bias factor of 5 to allow greater mesh density at both edges of the channel. However, limitations on mesh density and hardware have curtailed the sophistication achievable for the turbulence characteristics. Also, only linear rectangular channels were considered, i.e. curvature was ignored. Furthermore, only conventional water coolant was considered. From this CFD study, the variation of aspect ratio provided a deeper appreciation of the effect of small to high aspect ratios with regard to cooling channels. 
Hence, when considering an application for the channel, the geometry of the aspect ratio must play a crucial role in optimizing cooling performance.
Keywords: rocket channel cooling, ANSYS FLUENT CFD, turbulence, convection heat transfer
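The aspect ratios discussed in this abstract enter the flow calculation through the hydraulic diameter of the rectangular cross-section. A minimal sketch of that relationship follows; the channel dimensions and water properties below are illustrative assumptions, not values reported in the study:

```python
# Sketch: hydraulic diameter and Reynolds number for a rectangular
# mini-channel. All dimensions and fluid properties are illustrative
# assumptions, not values taken from the study.

def hydraulic_diameter(width_m: float, height_m: float) -> float:
    """D_h = 4A / P for a rectangular cross-section."""
    area = width_m * height_m
    perimeter = 2.0 * (width_m + height_m)
    return 4.0 * area / perimeter

def reynolds_number(density: float, velocity: float,
                    d_h: float, viscosity: float) -> float:
    """Re = rho * u * D_h / mu."""
    return density * velocity * d_h / viscosity

if __name__ == "__main__":
    # Hypothetical 28:1 aspect-ratio channel: 28 mm wide, 1 mm high.
    d_h = hydraulic_diameter(0.028, 0.001)
    # Water at roughly room temperature (assumed properties).
    re = reynolds_number(density=998.0, velocity=10.0,
                         d_h=d_h, viscosity=1.0e-3)
    print(f"D_h = {d_h * 1e3:.3f} mm, Re = {re:.0f}")
```

Note how a high aspect ratio shrinks the hydraulic diameter towards twice the channel height, which is one reason channel geometry dominates the cooling behaviour.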
Procedia PDF Downloads 150
1201 Using Mind Map Technique to Enhance Medical Vocabulary Retention for the First Year Nursing Students at a Higher Education Institution
Authors: Nguyen Quynh Trang, Nguyễn Thị Hông Nhung
Abstract:
The study aimed to identify the effectiveness of using the mind map technique to enhance medical vocabulary retention among a group of students at a higher education institution, Thai Nguyen University of Medicine and Pharmacy, during the first semester of the school year 2022-2023. The research employed a quasi-experimental method, drawing on primary sources such as questionnaires and the analyzed results of pre- and post-tests. Most teachers and students showed a high preference for the implementation of the mind map technique in language teaching and learning. Furthermore, results from the pre- and post-tests of the experimental group and the control group indicated that this technique yielded positive academic performance in teaching and learning English. The research findings revealed that there should be more supportive policies to encourage the use of the mind map technique in a pedagogical context. Aim of the study: The purpose of this research was to investigate whether mind mapping can enhance nursing students’ medical vocabulary retention and to assess the students’ attitudes toward using mind mapping as a tool to improve their vocabulary. Methodology of the study: The research employed a quasi-experimental method, exploring primary sources such as questionnaires and the analyzed results of pre- and post-tests. Contribution of the study: The research contributed to the innovation of vocabulary teaching methods for English teachers at a higher education institution. Moreover, it helped the English teachers and administrators at the university build and maintain the motivation of students not only in English classes but also in other subjects. The findings of this research are beneficial to teachers, students, and researchers interested in using mind mapping to teach and learn English vocabulary. 
The research explored and demonstrated the effectiveness of applying mind mapping in teaching and learning English vocabulary. Teaching and learning activities were therefore conducted more effectively, helping students overcome challenges in remembering vocabulary and building motivation to learn English vocabulary.
Keywords: medical vocabulary retention, mind map technique, nursing students, medical vocabulary
Procedia PDF Downloads 76
1200 Density Functional Theory Study of the Surface Interactions between Sodium Carbonate Aerosols and Fission Products
Authors: Ankita Jadon, Sidi Souvi, Nathalie Girault, Denis Petitprez
Abstract:
The interaction of fission products (FP) with sodium carbonate (Na₂CO₃) aerosols is of high safety concern because of their potential role in radiological source term mitigation by FP trapping. In a sodium-cooled fast nuclear reactor (SFR) experiencing a severe accident, sodium (Na) aerosols can be formed after the ejection of the liquid Na coolant inside the containment. The surface interactions between these aerosols and different FP species have been investigated using ab initio density functional theory (DFT) calculations with the Vienna Ab initio Simulation Package (VASP). In addition, an improved thermodynamic model has been proposed to treat the DFT-calculated energies and extrapolate them to the temperatures and pressures of interest in our study. A combined experimental and theoretical chemistry study has been carried out to obtain both an atomistic and a macroscopic understanding of the chemical processes; the theoretical chemistry part of this approach is presented in this paper. The Perdew, Burke, and Ernzerhof functional was applied in combination with Grimme’s van der Waals correction to compute the exchange-correlation energy at 0 K. Seven different surface cleavages of the γ-Na₂CO₃ phase (stable at 603.15 K) were studied; for defect-free surfaces, the (001) facet was found to be the most stable. Furthermore, calculations were performed to study surface defects and reconstructions on the ideal surface. All the studied surface defects were found to be less stable than the ideal surface. More than one adsorbate-ligand configuration was found to be stable, confirming that FP vapors could be trapped on various adsorption sites. The calculated adsorption energies (Eads, eV) for the three most stable adsorption sites for I₂ are -1.33, -1.088, and -1.085. Moreover, the adsorption of the first molecule of I₂ changes the surface in a way which favors stronger adsorption of a second molecule of I₂ (Eads, eV = -1.261). 
For HI adsorption, the most favored reactions have the following Eads (eV): -1.982, -1.790, and -1.683, implying that HI would be more reactive than I₂. In addition to FP species, the adsorption of H₂O was also studied, as a hydrated surface can have a different reactivity than the bare surface. One thermodynamically favored site for H₂O adsorption was found, with an Eads (eV) of -0.754. Finally, the calculations on hydrated Na₂CO₃ surfaces show that a layer of water adsorbed on the surface significantly reduces its affinity for iodine (Eads, eV = -1.066). According to the thermodynamic model built, the partial pressure required at 373 K for adsorption of the first layer of iodine is 4.57×10⁻⁴ bar. The second layer will be adsorbed at partial pressures higher than 8.56×10⁻⁶ bar; a layer of water on the surface will increase these pressures almost tenfold to 3.71×10⁻³ bar. The surface interacts with elemental Cs with an Eads (eV) of -1.60, and interacts even more strongly with CsI, with an Eads (eV) of -2.39. More results on the interactions between Na₂CO₃ (001) and cesium-based FP will also be presented in this paper.
Keywords: iodine uptake, sodium carbonate surface, sodium-cooled fast nuclear reactor, DFT calculations, fission products
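The step from a 0 K DFT adsorption energy to a threshold partial pressure, as described in this abstract, rests on ideal-gas thermochemistry. A minimal sketch of that kind of relation follows; the entropy-loss term is an assumed illustrative value, not a quantity computed in the study, so this will not reproduce the reported pressures:

```python
# Sketch: threshold partial pressure at which adsorption becomes
# thermodynamically favourable, from a DFT adsorption energy.
# The gas-phase entropy loss dS is an illustrative assumption.
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def threshold_pressure(e_ads_ev: float, t_kelvin: float,
                       delta_s_ev_per_k: float,
                       p0_bar: float = 1.0) -> float:
    """Pressure p where dG(T, p) = E_ads - T*dS + k_B*T*ln(p0/p) = 0,
    i.e. p = p0 * exp((E_ads - T*dS) / (k_B * T))."""
    dg0 = e_ads_ev - t_kelvin * delta_s_ev_per_k
    return p0_bar * math.exp(dg0 / (K_B * t_kelvin))

if __name__ == "__main__":
    # Hypothetical case: E_ads = -1.33 eV (strongest I2 site above),
    # assumed entropy loss of -0.002 eV/K, T = 373 K.
    p = threshold_pressure(-1.33, 373.0, -0.002)
    print(f"threshold pressure ~ {p:.3e} bar")
```

The qualitative behaviour matches the abstract: a weaker adsorption energy (e.g. on a hydrated surface) raises the partial pressure needed before the layer forms.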
Procedia PDF Downloads 151
1199 Effect of Different Ground Motion Scaling Methods on Behavior of 40 Story RC Core Wall Building
Authors: Muhammad Usman, Munir Ahmed
Abstract:
The demand for high-rise buildings has grown fast during the past decades, and the design of these buildings using RC core walls has become widespread in many countries. RC core wall (RCCW) buildings comprise a central core wall and boundary columns joined through post-tensioned slabs at different floor levels. The core wall often provides greater stiffness than the collective stiffness of the boundary columns; hence, the core wall predominantly resists lateral loading, i.e. wind or earthquake load. Non-linear response history analysis (NLRHA) is currently the most refined seismic design procedure for high-rise buildings, and modern tools for nonlinear response history analysis and performance-based design have provided more confidence in designing these structures. NLRHA requires the selection and scaling of ground motions to match the design spectrum for site-specific conditions. Designers use several techniques for scaling ground motion records (time series); time-domain and frequency-domain scaling are the most commonly used, each with its own benefits and drawbacks. Because NLRHA is a lengthy process, applying only one technique is practicable. To the best of the authors’ knowledge, no consensus on the best procedures for the selection and scaling of ground motions is available in the literature. This research aims to identify the most suitable ground motion scaling technique specifically for designing 40-story high-rise RCCW buildings. The seismic response of a 40-story RCCW building is checked by applying both frequency-domain and time-domain scaling. Variable sites are selected in three critical seismic zones of Pakistan. The results indicate that there is extensive variation in the seismic response of the building between these scaling methods. 
There is still a need to build a consensus on this subject by investigating further sites and building heights.
Keywords: 40-story RC core wall building, nonlinear response history analysis, ground motions, time domain scaling, frequency domain scaling
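The two scaling families contrasted in this abstract can be sketched in a few lines. This is a minimal illustration of the general idea, not the specific procedures used in the study; the record, target value, and frequency band are assumptions:

```python
# Sketch contrasting time-domain and frequency-domain scaling of a
# ground motion record. The record, target PGA, and frequency band
# are illustrative assumptions, not data from the study.
import numpy as np

def scale_time_domain(accel: np.ndarray, target_pga: float) -> np.ndarray:
    """Time-domain scaling: multiply the whole record by one scalar
    so its peak ground acceleration matches the target. The frequency
    content of the record is left unchanged."""
    factor = target_pga / np.max(np.abs(accel))
    return accel * factor

def scale_frequency_domain(accel: np.ndarray, dt: float,
                           f_lo: float, f_hi: float,
                           factor: float) -> np.ndarray:
    """Frequency-domain scaling: amplify the Fourier amplitudes only
    inside a chosen frequency band, then transform back. This alters
    the record's frequency content, unlike the scalar approach."""
    spectrum = np.fft.rfft(accel)
    freqs = np.fft.rfftfreq(accel.size, d=dt)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[band] *= factor
    return np.fft.irfft(spectrum, n=accel.size)
```

The design trade-off the abstract alludes to is visible here: the scalar approach preserves the recorded waveform exactly, while band-wise amplification can better match a target spectrum at the cost of modifying the original motion.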
Procedia PDF Downloads 131
1198 Prosecution as Persecution: Exploring the Enduring Legacy of Judicial Harassment of Human Rights Defenders and Political Opponents in Zimbabwe, Cases from 2013-2016
Authors: Bellinda R. Chinowawa
Abstract:
As part of a wider strategy to stifle civil society, governments routinely resort to judicial harassment through the use of civil and criminal proceedings to impugn the integrity of human rights defenders and that of perceived political opponents. This phenomenon is rife in militarised or autocratic regimes where there is no tolerance for dissenting voices. Zimbabwe, ostensibly a presidential republic founded on the values of transparency, equality, and freedom, is characterised by the brutal suppression of perceived political opponents and those who assert their basic human rights. This is done through a wide range of tactics, including unlawful arrests and detention, torture and other cruel, inhuman and degrading treatment, and enforced disappearances. Professionals, including journalists and doctors, are similarly not spared from state attack. For human rights defenders, the most widely used tool of repression is judicial harassment, where the judicial system is used to persecute them. This can include the levying of criminal charges, civil lawsuits and unnecessary administrative proceedings. Charges preferred against them range from petty offences, such as criminal nuisance, to more serious charges of terrorism and subverting a constitutional government. Additionally, government-sponsored individuals and organisations file strategic lawsuits with pecuniary implications in order to intimidate and silence critics and engender self-censorship. Some HRDs are convicted and sentenced to prison terms, despite not being criminals in a true sense; while others are acquitted, judicial harassment diverts energy and resources away from their human rights work. Through a consideration of statistical data reported by human rights organisations and face-to-face interviews with a cross-section of human rights defenders, the article will map the incidence of judicial harassment in Zimbabwe. 
The article will consider the multi-level sociological and contextual factors which influence the Government of Zimbabwe to have easy recourse to criminal law, and the debilitating effect of these actions on HRDs. These factors include the breakdown of the rule of law resulting in state capture of the judiciary, the proven efficacy of judicial harassment from colonial times to date, and the lack of an adequate redress mechanism at the international level. By mapping the use of the judiciary as a tool of repression, from the inception of modern-day Zimbabwe to date, it is hoped that HRDs will realise that they are part of a greater community of activists throughout the ages, and be emboldened by the realisation that this is an age-old tactic used by fallen regimes which should not deter them from calling for accountability.
Keywords: autocratic regime, colonial legacy, judicial harassment, human rights defenders
Procedia PDF Downloads 233
1197 Stimulating Team Creativity: A Study on Creative-Oriented Integrated Design Companies in Taiwan
Authors: Yueh Hsiu Giffen Cheng, Teng Jung Wang
Abstract:
According to the study of the British National Advisory Committee on Creative and Cultural Education (NACCCE), the present and the future need innovative and creative people from the perspective of commercial human resources. Creativity therefore plays an important role in today’s enterprises. Besides, many companies now aim at developing teamwork as a main goal, so “creativity” and “teamwork” have become increasingly important factors for success, and team creativity has gradually turned into an important issue. The study conducts in-depth interviews with the leaders of design companies and uses a self-designed questionnaire on factors affecting team creativity to carry out a cross-analysis. The results show that for creative-oriented integrated design companies, design strategies do not begin until data collection, and scripts are usually the best way to inspire creativity. Besides, passing down a legacy of experience is their common form of educational training. Most importantly, their organizational resources and leaders can help the whole team learn and grow effectively, and good interaction between the leader and the members can also bring flexibility and efficiency to the work. In short, the leader’s expectation of members’ performance can cause them to encourage each other to progress. Moreover, the analysis of the questionnaire indicates that members who are open-minded and leaders who have a transformational leadership style can both help to establish good team interaction. Furthermore, abundant resources and a training system are also good approaches to establishing a harmonious relationship. Finally, by integrating the outcomes of the interviews and questionnaires, we can infer that these integrated design companies’ design processes are mainly guided by their leaders. 
In addition, the analysis of design problems is focused on their creative strategies, and their scripts and sketches can also inspire creativity. In sum, the character of each team is influenced by four factors: leaders with a transformational leadership style, open-minded members, a flexible working environment, and resources and interactive relationships. Ultimately, the study hopes that the results above can be applied to design-related industries or help companies in general elevate their team creativity.
Keywords: creativity, team creativity, integrated design companies, design process
Procedia PDF Downloads 356