Search results for: reduced modeling
514 Developing Three-Dimensional Digital Image Correlation Method to Detect the Crack Variation at the Joint of Weld Steel Plate
Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung
Abstract:
Hydraulic gates store and drain water in reservoirs and hydropower plants, bearing long-term hydraulic pressure and earthquake forces, so their integrity is critical. High-tensile-strength steel plate is used as the constructional material of hydraulic gates. Cracks and rust, induced by material defects, poor construction, seismic excitation and underwater exposure, produce stress concentration at the crack, and the resulting high crack growth rate affects the safety and serviceability of the hydroelectric power plant. Stress distribution analysis is therefore an essential technique for analyzing bi-material and singular-point problems. The finite difference infinitely small element method has been demonstrated to be suitable for analyzing the buckling of welding seams and cracked steel plate, and can easily handle the singularity of a kink crack. Nevertheless, the construction form and deformation shape of some gates are three-dimensional. Therefore, three-dimensional Digital Image Correlation (DIC) has been developed and applied to analyze the strain variation of cracked steel plate at the weld joint. DIC is a non-contact method for measuring the deformation of a test object, and the rapid development of digital cameras has reduced its cost. Moreover, DIC is applicable to both indoor and field tests without restriction on the size of the test object. The purpose of this research is thus to develop and apply the technique to monitor the crack variation of a welded steel hydraulic gate and its deformation under loading.
Images can be extracted from the real-time monitoring process to analyze the strain change at each loading stage. The three-dimensional DIC method developed in this study is applied to analyze the post-buckling behaviour and buckling tendency of welded steel plate with a crack, and the three-dimensional stress intensity of different and reinforced materials in the steel plate is then analyzed. The test results show that the proposed three-dimensional DIC method can precisely detect the crack variation of welded steel plate under different loading stages. In particular, it can detect and identify the crack position and other flaws of the welded steel plate that traditional test methods can hardly detect. The proposed method can therefore be applied to observe the mechanical behaviour of composite materials under loading and operation.
Keywords: welded steel plate, crack variation, three-dimensional digital image correlation (DIC), cracked steel plate
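The core of DIC is subset matching: a small patch of the reference image is located in the deformed image by maximizing a correlation score, and the resulting displacement field is then differentiated to obtain strains. As an illustration only (the abstract does not disclose the authors' algorithm), a minimal integer-pixel sketch using zero-normalized cross-correlation might look like:

```python
import numpy as np

def zncc(ref, cand):
    """Zero-normalized cross-correlation between two equally sized subsets."""
    r = ref - ref.mean()
    c = cand - cand.mean()
    denom = np.sqrt((r**2).sum() * (c**2).sum())
    return (r * c).sum() / denom if denom > 0 else 0.0

def match_subset(ref_img, def_img, top_left, size, search=5):
    """Locate a reference subset in the deformed image by exhaustive ZNCC search.

    Returns the integer-pixel displacement (du, dv) maximizing the correlation,
    together with the correlation score itself.
    """
    y, x = top_left
    ref = ref_img[y:y+size, x:x+size]
    best, best_duv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            yy, xx = y + dv, x + du
            if yy < 0 or xx < 0 or yy + size > def_img.shape[0] or xx + size > def_img.shape[1]:
                continue  # candidate subset falls outside the image
            score = zncc(ref, def_img[yy:yy+size, xx:xx+size])
            if score > best:
                best, best_duv = score, (du, dv)
    return best_duv, best
```

Practical three-dimensional DIC adds sub-pixel interpolation, subset shape functions and stereo camera calibration for out-of-plane motion; none of that is shown in this sketch.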
Procedia PDF Downloads 520
513 Modeling of Geotechnical Data Using GIS and Matlab for Eastern Ahmedabad City, Gujarat
Authors: Rahul Patel, S. P. Dave, M. V. Shah
Abstract:
Ahmedabad is a rapidly growing city in western India that is experiencing significant urbanization and industrialization. With projections indicating that it will become a metropolitan city in the near future, numerous construction activities are taking place, making soil testing a crucial requirement before construction can commence. To achieve this, construction companies and contractors need to conduct soil testing periodically. This study focuses on the process of creating a digitally formatted spatial database that integrates geotechnical data with a Geographic Information System (GIS). Building a comprehensive geotechnical geo-database involves three essential steps. First, borehole data is collected from reputable sources. Second, the accuracy and redundancy of the data are verified. Finally, the geotechnical information is standardized and organized for integration into the database. Once the geo-database is complete, it is integrated with GIS. This integration allows users to visualize, analyze, and interpret geotechnical information spatially. Using a Topo to Raster interpolation process in GIS, estimated values are assigned to all locations based on the sampled geotechnical data. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (t/m²). Various interpolation techniques were cross-validated to ensure information accuracy. The GIS map generated by this study enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths at various depths. This approach highlights the potential of GIS in providing an efficient solution to complex phenomena that would otherwise be tedious to address by other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, the system serves as a decision support tool for geotechnical engineers.
The information generated by this study can be utilized by engineers to make informed decisions during construction activities; for instance, they can use the data to optimize foundation designs and improve site selection. In conclusion, the rapid growth of Ahmedabad requires extensive construction activity, which in turn necessitates soil testing. This study focused on creating a comprehensive geotechnical database integrated with GIS, developed by collecting borehole data from reputable sources, verifying its accuracy and redundancy, and organizing the information for integration. The GIS map generated by this study offers greater accuracy, produces valuable input for correlation analysis, and serves as a decision support tool that allows geotechnical engineers to make informed decisions during construction activities.
Keywords: ArcGIS, borehole data, geographic information system (GIS), geo-database, interpolation, SPT N-value, soil classification, φ-value, bearing capacity
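The abstract relies on ArcGIS raster interpolation to assign values at unsampled locations. As a rough stand-in for the idea (not the authors' actual Topo to Raster workflow), inverse distance weighting estimates a value such as the SPT N-value at a query point from nearby boreholes:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at query points from sampled boreholes.

    xy_known: (n, 2) borehole coordinates; values: (n,) sampled values;
    xy_query: (m, 2) points to estimate. Returns an (m,) array.
    """
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    out = []
    for q in np.atleast_2d(np.asarray(xy_query, float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                 # query coincides with a borehole
            out.append(values[d.argmin()])
            continue
        w = 1.0 / d**power                # closer boreholes dominate
        out.append((w * values).sum() / w.sum())
    return np.array(out)
```

The cross-validation the study mentions can be mimicked by leaving each borehole out in turn and comparing the interpolated value against the measured one.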
Procedia PDF Downloads 68
512 Evaluating the Effectiveness of Mesotherapy and Topical 2% Minoxidil for Androgenic Alopecia in Females, Using Topical 2% Minoxidil as a Common Treatment
Authors: Hamed Delrobai Ghoochan Atigh
Abstract:
Androgenic alopecia (AGA) is a common form of hair loss that affects approximately 50% of females and leads to reduced self-esteem and quality of life. It causes progressive follicular miniaturization in genetically predisposed individuals. Mesotherapy (a minimally invasive procedure), topical 2% minoxidil, and oral finasteride have emerged as popular treatment options in cosmetic practice; however, the efficacy of mesotherapy relative to the other options remains unclear. This study aims to assess the effectiveness of mesotherapy added to topical 2% minoxidil treatment in female androgenic alopecia. Mesotherapy, also known as intradermotherapy, is a technique that entails administering multiple intradermal injections of a carefully composed mixture of compounds in low doses, applied at various points close to or directly over the affected areas. The study is a randomized controlled trial with 100 female participants diagnosed with androgenic alopecia. The subjects were randomly assigned to two groups: Group A used topical 2% minoxidil twice daily and took oral finasteride, while Group B received 10 mesotherapy sessions in addition to this treatment. The injections were administered every week in the first month of treatment, every two weeks in the second month, and monthly thereafter for four consecutive months. Responses were assessed at baseline, at the 4th session, and after 6 months, when the treatment was complete. Clinical photographs, a 7-point Likert-scale patient self-evaluation, and a 7-point Likert-scale assessment tool were used to measure the effectiveness of the treatment. A significant and visible improvement in hair density and thickness was observed. The study demonstrated significantly greater treatment efficacy in Group B than in Group A post-treatment, with no adverse effects.
Based on the findings, mesotherapy appears to offer a significant improvement in female AGA over minoxidil and finasteride alone. Hair loss stopped in Group B after one month, and improvement in hair density and thickness was observed after the third month. The findings provide valuable insights into the efficacy of mesotherapy in treating female androgenic alopecia, and the evaluation offers a detailed assessment of hair growth parameters, enabling a better understanding of the treatments' effectiveness. The potential of this promising technique is significantly enhanced when it is carried out in a medical facility, guided by appropriate indications and skillful execution. An interesting observation in our study is that in areas where the hair had turned grey, the newly regrown hair does not retain its original grey color but instead grows in darker. The results contribute to evidence-based decision-making in dermatological practice and offer new insights into the treatment of female pattern hair loss.
Keywords: androgenic alopecia, female hair loss, mesotherapy, topical 2% minoxidil
Procedia PDF Downloads 102
511 A Work-Individual-Family Inquiry on Mental Health and Family Responsibility of Dealers Employed in Macau Gaming Industry
Authors: Tak Mau Simon Chan
Abstract:
While there is growing reflection on the adverse impacts of the flourishing gaming industry on the physical health and job satisfaction of those who work in Macau casinos, there is a critical void in our understanding of the mental health of croupiers and of how casino employment interacts with the family system. From a systemic approach, it is most effective to examine 'dealer issues' collectively and to offer assistance to both the individual dealer and the dealer's family system. Therefore, using a mixed-method study design, the levels of anxiety, depression and sleeping quality of a sample of 1,124 dealers working in Macau casinos were measured, and 113 dealers were interviewed about the impacts of casino employment on their family life. The study presents several important findings. First, the quantitative study indicates that gender is a significant predictor of depression and anxiety levels, while lower income is associated with poorer sleep quality. The Pearson correlation coefficients show that as Zung Self-rating Anxiety Scale (ZSAS) scores increase, Zung Self-rating Depression Scale (ZSDS) and Pittsburgh Sleep Quality Index (PSQI) scores increase simultaneously. Higher income might therefore partly explain why mothers choose to work in the gaming industry despite the shift work and stressful work environment involved. Second, the findings of the qualitative study show that, aside from the positive impact on family finances, the shift work and job stress to some degree negatively affect family responsibilities and relationships. The resulting family issues include missed family activities and reduced parental care and guidance, marital intimacy, and communication with family members.
Despite mixed views on gender role differences, the respondents generally agree that female dealers have more family and child-minding responsibilities at home, making it more difficult for them to balance work and family; consequently, they may be more vulnerable to stress at work. Third, there are interrelationships between work and family, based on a systemic inquiry that incorporates work, individual and family. Poor physical and psychological health due to shift work or a harmful work environment can affect not just work performance but also life at home. Therefore, practice points concerning 1) work-family conflicts in Macau, 2) families in transition in Macau, and 3) gender and class sensitivity in Macau are provided for social workers and family practitioners, which will greatly benefit these families, especially those whose members work in the gaming industry in Macau. It is concluded that, in addressing the cultural phenomenon of the 'dealer's complex' in Macau, a systemic approach is recommended that addresses both the personal psychological needs and the family issues of dealers.
Keywords: family, work stress, mental health, Macau, dealers, gaming industry
Procedia PDF Downloads 304
510 Solar Liquid Desiccant Regenerator for Two Stage KCOOH Based Fresh Air Dehumidifier
Authors: M. V. Rane, Tareke Tekia
Abstract:
Liquid desiccant based fresh air dehumidifiers can be gainfully deployed for air-conditioning, agro-produce drying and many industrial processes. Regeneration of the liquid desiccant can be done using direct firing, high temperature waste heat or solar energy. Solar energy is clean and available in abundance; however, it is costly to collect. A two stage liquid desiccant fresh air dehumidification system can offer a Coefficient of Performance (COP) in the range of 1.6 to 2 for comfort air conditioning applications. The high COP helps reduce the size and cost of the collectors required. Performance tests on the high temperature regenerator of a two stage liquid desiccant fresh air dehumidifier coupled with a seasonally tracked flat-plate-like solar collector are presented in this paper. The two stage fresh air dehumidifier has four major components: High Temperature Regenerator (HTR), Low Temperature Regenerator (LTR), high and low temperature solution heat exchangers, and Fresh Air Dehumidifier (FAD). This open system can operate at near atmospheric pressure in all components, and such systems can be simple, maintenance-free and scalable. Environmentally benign, non-corrosive, moderately priced potassium formate, KCOOH, is used as the liquid desiccant. The KCOOH concentration in the system is expected to vary between 65 and 75%. Dilute liquid desiccant at 65% concentration exiting the fresh air dehumidifier is pumped and preheated in the solution heat exchangers before entering the high temperature solar regenerator. In the solar collector, the solution is regenerated to an intermediate concentration of 70%. Steam and saturated solution exiting the solar collector array are separated. The steam, at near atmospheric pressure, is then used to regenerate the intermediate concentration solution up to a concentration of 75% in the low temperature regenerator, where the vaporized moisture is released into the atmosphere.
The condensed steam can be used as potable water after adding a pinch of salt and some nutrients. The warm concentrated liquid desiccant is routed to the solution heat exchanger to recycle its heat to preheat the weak liquid desiccant solution. An evacuated glass tube based, seasonally tracked solar collector is used for regeneration of the liquid desiccant at high temperature. The regeneration temperature for KCOOH is 133°C at 70% concentration, and the medium temperature collector was designed for the range of 100 to 150°C. A double wall polycarbonate top cover helps reduce top losses. Absorber-integrated heat storage helps stabilize the temperature of the liquid desiccant exiting the collectors during intermittent cloudy conditions and extends the operation of the system by a couple of hours beyond the sunshine hours. The solar collector is light in weight: 12 kg/m² without the absorber-integrated heat storage material and 27 kg/m² with it. The cost of the collector is estimated to be 10,000 INR/m². Theoretical modeling of the collector has shown that its optical efficiency is 62%. Performance tests on the regeneration of KCOOH will be reported.
Keywords: solar, liquid desiccant, dehumidification, air conditioning, regeneration
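The claim that a high thermal COP reduces collector size can be made concrete with a back-of-envelope sizing. The insolation value below (0.8 kW/m²) is an illustrative assumption, not a design value from the paper; the 62% efficiency is the optical efficiency quoted for the collector and ignores thermal losses:

```python
def collector_area_m2(cooling_kW, cop, insolation_kW_m2, collector_eff):
    """Collector area needed to supply the regeneration heat for a cooling duty.

    Thermal COP is defined as cooling delivered per unit heat input, so the
    required heat input is cooling_kW / cop; the collector must gather that
    heat at the given insolation and efficiency.
    """
    heat_input_kW = cooling_kW / cop
    return heat_input_kW / (insolation_kW_m2 * collector_eff)
```

For a 10 kW cooling duty at 0.8 kW/m² insolation, raising the COP from 1.6 to 2 cuts the required area from about 12.6 m² to about 10.1 m², which is the cost advantage the abstract alludes to.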
Procedia PDF Downloads 348
509 Engineered Control of Bacterial Cell-to-Cell Signaling Using Cyclodextrin
Authors: Yuriko Takayama, Norihiro Kato
Abstract:
Quorum sensing (QS) is a cell-to-cell communication system that bacteria use to regulate the expression of target genes. In gram-negative bacteria, QS activation is controlled by an increase in the concentration of N-acylhomoserine lactone (AHL), which can diffuse in and out of the cell. Effective control of QS is expected to prevent virulence factor production in infectious pathogens, biofilm formation, and antibiotic production, because various cell functions in gram-negative bacteria are controlled by AHL-mediated QS. In this research, we applied cyclodextrins (CDs) as artificial hosts for the AHL signal to reduce the AHL concentration in the culture broth below its threshold for QS activation. The AHL-receptor complex, induced at high AHL concentration, activates transcription of the QS target genes; accordingly, artificial reduction of the AHL concentration is an effective strategy for inhibiting QS. The hydrophobic cavity of a CD can bind the acyl chain of the AHL through hydrophobic interaction in aqueous media. We studied N-hexanoylhomoserine lactone (C6HSL)-mediated QS in Serratia marcescens, in which the accumulation of C6HSL is responsible for regulating the expression of the pig cluster. The inhibitory effects of added CDs on QS were demonstrated by determining the amount of prodigiosin inside cells after the culture reached stationary phase, because prodigiosin production depends on C6HSL-mediated QS. By adding approximately 6 wt% hydroxypropyl-β-CD (HP-β-CD) to Luria-Bertani (LB) medium prior to inoculation of S. marcescens AS-1, the intracellularly accumulated prodigiosin, determined after extraction in acidified ethanol, was drastically reduced to 7-10%. The AHL retention ability of HP-β-CD was also demonstrated by a Chromobacterium violaceum CV026 bioassay. The CV026 strain is an AHL-synthase-deficient mutant that activates QS solely upon addition of AHLs from outside the cells.
A purple pigment, violacein, is induced by activation of AHL-mediated QS. We demonstrated that violacein production was effectively suppressed when the C6HSL standard solution was spotted on an LB agar plate in which CV026 cells and HP-β-CD were dispersed. Physico-chemical analysis was performed to study the affinity between immobilized CD and added C6HSL using a quartz crystal microbalance (QCM) sensor. A COOH-terminated self-assembled monolayer was prepared on the gold electrode of a 27-MHz AT-cut quartz crystal, and mono(6-deoxy-6-N,N-diethylamino)-β-CD was immobilized on the electrode using water-soluble carbodiimide. The interaction of C6HSL with the β-CD cavity was studied by injecting the C6HSL solution into a cup-type sensor cell filled with buffer solution. The decrease in resonant frequency (ΔFs) clearly showed effective C6HSL complexation with the immobilized β-CD, and the stability constant of the complex was on the order of 10² M⁻¹. The CD has high potential for engineered control of QS because it is safe for human use.
Keywords: acylhomoserine lactone, cyclodextrin, intracellular signaling, quorum sensing
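A stability constant on the order of 10² M⁻¹ is modest, which is why a large CD excess is needed. Assuming simple 1:1 host-guest binding, and taking 6 wt% HP-β-CD as roughly 0.04 M free host (a molar mass of about 1400 g/mol is an assumption here; HP-β-CD substitution varies), the fraction of AHL sequestered can be sketched as:

```python
def fraction_complexed(K, cd_free_M):
    """Fraction of AHL bound in a 1:1 host-guest complex with CD in large excess.

    K is the stability (association) constant in M^-1; cd_free_M is the free
    cyclodextrin concentration in mol/L. Follows theta = K*C / (1 + K*C).
    """
    return K * cd_free_M / (1.0 + K * cd_free_M)
```

With K = 100 M⁻¹ and about 0.043 M CD, roughly 80% of the C6HSL would be complexed, which is consistent with the strong QS suppression observed despite the weak per-molecule binding.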
Procedia PDF Downloads 239
508 Interactivity as a Predictor of Intent to Revisit Sports Apps
Authors: Young Ik Suh, Tywan G. Martin
Abstract:
Sports apps on a smartphone provide up-to-date information and fast, convenient access to live games. The market for sports apps has emerged as the second fastest growing app category worldwide. Many sports fans use their smartphones to check the schedules of sporting events, players' positions and bios, and videos and highlights. In recent years, a growing number of scholars and practitioners alike have emphasized the importance of interactivity in sports apps, hypothesizing that interactivity plays a significant role in enticing sports app users and that it is a key component in measuring the success of sports apps. Interactivity in sports apps focuses primarily on two functions, (1) two-way communication and (2) active user control, neither of which was applicable through traditional mass media and communication technologies. The purpose of this study is therefore to examine whether the interactivity functions of sports apps lead to positive outcomes such as intent to revisit. More specifically, this study investigates how three major functions of interactivity (two-way communication, active user control, and real-time information) influence the attitude of sports app users and their intent to revisit the apps. The following hypothesis is proposed: interactivity functions will be positively associated with both attitudes toward sports apps and intent to revisit sports apps. The survey questionnaire includes four parts: (1) an interactivity scale, (2) an attitude scale, (3) a behavioral intention scale, and (4) demographic questions. Data are to be collected from users of the ESPN app. To examine the relationships among the observed and latent variables and determine the reliability and validity of the constructs, confirmatory factor analysis (CFA) is conducted, and structural equation modeling (SEM) is utilized to test the hypothesized relationships among constructs.
Additionally, this study compares the proposed interactivity model with a rival model to identify the role of attitude as a mediating factor. The findings of the current study provide several theoretical and practical contributions and implications by extending the research and literature on the role of interactivity functions in sports apps and sports media consumption behavior. Specifically, this study may improve the theoretical understanding of whether interactivity functions influence user attitudes and intent to revisit sports apps, and it identifies which dimensions of interactivity are most important to sports app users. From a practitioner's perspective, the findings provide significant implications: entrepreneurs and investors in the sport industry need to recognize that high-resolution photos, live streams, and up-to-date stats are in the sports app, right at sports fans' fingertips. The results imply that sport practitioners may need to develop mobile sports apps that offer greater interactivity in order to attract sport fans.
Keywords: interactivity, two-way communication, active user control, real time information, sports apps, attitude, intent to revisit
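Before CFA and SEM, multi-item scales such as the interactivity scale are commonly screened for internal consistency with Cronbach's alpha. The abstract does not state which reliability statistic is used, so the following is illustrative only: a minimal implementation for an (n respondents × k items) score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores),
    using sample (ddof=1) variances.
    """
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)
```

Values above roughly 0.7 are conventionally taken as acceptable before proceeding to the CFA measurement model.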
Procedia PDF Downloads 147
507 Continuous and Discontinuous Modeling of Wellbore Instability in Anisotropic Rocks
Authors: C. Deangeli, P. Obentaku Obenebot, O. Omwanghe
Abstract:
The study focuses on the analysis of wellbore instability in rock masses affected by weakness planes. In such rocks, failure can occur in the rock matrix and/or along the weakness planes, in relation to the mud weight gradient, so the simple Kirsch solution coupled with a failure criterion cannot supply a suitable scenario for borehole instabilities. Two different numerical approaches have been used to investigate the onset of local failure at the wall of a borehole. For each approach the influence of the inclination of the weakness planes has been investigated, by considering joint sets at 0°, 35° and 90° to the horizontal. The first set of models was carried out with FLAC 2D (Fast Lagrangian Analysis of Continua), treating the rock material as a continuous medium with a Mohr-Coulomb criterion for the rock matrix and using the ubiquitous joint model to account for the presence of the weakness planes. In this model, yield may occur in the solid, along the weak plane, or both, depending on the stress state, the orientation of the weak plane and the material properties of the solid and weak plane. The second set of models was performed with PFC2D (Particle Flow Code). This code is based on the Discrete Element Method and treats the rock material as an assembly of grains bonded by cement-like material, with pore spaces. The presence of weakness planes is simulated by degrading the bonds between grains along given directions. In general, the results of the two approaches are in agreement. However, the discrete approach seems to capture more complex phenomena related to local failure, in the form of grain detachment at the wall of the borehole. In fact, the presence of weakness planes in the discontinuous medium leads to local instability along the weak planes even in conditions not predicted by the continuous solution.
In general, slip failure locations and directions do not follow the conventional wellbore breakout direction but depend upon the internal friction angle and the orientation of the bedding planes. When the weakness planes are at 0° or 90°, the behaviour is similar to that of a continuous rock material, but borehole instability is more severe when the weakness planes are inclined at an angle between 0° and 90° to the horizontal. In conclusion, the results of the numerical simulations show that the prediction of local failure at the wall of the wellbore cannot disregard the presence of weakness planes, and consequently the higher mud weight required for stability at any specific inclination of the joints. Although the discrete approach can only simulate smaller areas, because of the large number of particles required to generate the rock material, it seems to investigate more correctly the occurrence of failure at the microscale and, eventually, the propagation of the failed zone to a large portion of rock around the wellbore.
Keywords: continuous-discontinuous, numerical modelling, weakness planes, wellbore, FLAC 2D
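For reference, the Kirsch solution that the authors argue is insufficient for jointed rock gives the hoop stress at the wall of a vertical borehole in an isotropic elastic medium. A minimal sketch (pore pressure and thermal terms omitted; total stresses assumed) is:

```python
import math

def hoop_stress_wall(sigma_H, sigma_h, p_w, theta_deg):
    """Kirsch hoop stress at the wall of a vertical borehole.

    sigma_H, sigma_h: max/min horizontal far-field stresses; p_w: mud pressure.
    theta is measured from the direction of sigma_H. At the wall (r = a) the
    Kirsch expressions reduce to:
        sigma_theta = sigma_H + sigma_h - 2*(sigma_H - sigma_h)*cos(2*theta) - p_w
    """
    th = math.radians(theta_deg)
    return sigma_H + sigma_h - 2.0 * (sigma_H - sigma_h) * math.cos(2.0 * th) - p_w
```

For σH = 30 MPa, σh = 20 MPa and a mud pressure of 10 MPa, the hoop stress ranges from 20 MPa at the σH azimuth to 60 MPa at 90° to it, where breakouts are conventionally predicted; the paper's point is that weakness planes shift these failure locations away from this prediction.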
Procedia PDF Downloads 499
506 Seismic Assessment of Flat Slab and Conventional Slab System for Irregular Building Equipped with Shear Wall
Authors: Muhammad Aji Fajari, Ririt Aprilin Sumarsono
Abstract:
Particular instability of a structural building under lateral load (e.g. earthquake) arises due to irregularity in the vertical and horizontal directions, as stated in SNI 03-1726-2012. The conventional slab is considered to contribute little to the stability of the structure, unless a special slab system such as a flat slab is taken into account. In this paper, the flat slab system of Sequis Tower, located in South Jakarta, is assessed for its performance under earthquake loading. The building has 6 basement floors where the flat slab system is applied, and this system is the main focus of the paper, to be compared with a conventional slab system under earthquake loading. Regarding the floor plan of the Sequis Tower basement, the re-entrant corner ratio of this building is 43.21%, which exceeds the allowable 15% stated in ASCE 7-05. Based on this, horizontal irregularity is a concern for the analysis, whereas vertical irregularity does not exist for this building. A flat slab system is one in which the slabs are supported by drop panels with shear heads instead of beams. The major advantages of flat slab application are a reduced structural dead load, the removal of beams so that the clear height can be maximized, and lateral resistance under lateral load; meanwhile, deflection at the middle strip and punching shear must be considered in detail. Torsion usually appears when a structural member under flexure, such as a beam or column, has an improper dimension ratio; considering a flat slab as an alternative slab system keeps collapse due to torsion down. The common seismic load resisting system applied in the building is the shear wall. Installation of shear walls makes the structural system stronger and stiffer, resulting in reduced displacement under earthquake.
The eccentricity of the shear wall location in this building resolves the instability due to horizontal irregularity so that the earthquake load can be absorbed. Performing linear dynamic analyses, such as response spectrum and time history analysis, is suitable because of the irregularity, so that the performance of the structure can be observed in detail. The response spectrum for South Jakarta, with a PGA of 0.389 g, is the basis for the earthquake load idealization used in the load combinations stated in SNI 03-1726-2012. The analysis results in basic seismic parameters such as the period, displacement, and base shear of the system; the internal forces of the critical members are also presented. The predicted period of the structure under earthquake load is 0.45 s, but the period will differ as different slab systems are applied in the analysis. The flat slab system will probably perform better in terms of displacement than the conventional slab system, due to its higher stiffness contribution to the whole building system. In line with the displacement, the slab deflection will be smaller for the flat slab than for a conventional slab. Hence, the shear wall is more effective in strengthening the conventional slab system than the flat slab system.
Keywords: conventional slab, flat slab, horizontal irregularity, response spectrum, shear wall
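The base shear that such an analysis reports can be sketched with the equivalent-lateral-force seismic coefficient of ASCE 7, whose format SNI 03-1726-2012 follows. The spectral values below are placeholders, not the South Jakarta design spectrum, and the minimum-Cs provisions of the codes are omitted:

```python
def design_base_shear(W_kN, S_DS, S_D1, T_s, R, I_e):
    """Equivalent-lateral-force design base shear V = Cs * W (ASCE 7-style).

    Cs is the short-period plateau value S_DS/(R/I_e), capped by the
    velocity-governed limit S_D1/(T*(R/I_e)) for longer-period structures.
    """
    Cs = S_DS / (R / I_e)
    Cs_cap = S_D1 / (T_s * (R / I_e))
    return min(Cs, Cs_cap) * W_kN
```

For an assumed seismic weight of 10,000 kN with S_DS = 0.6, S_D1 = 0.3, R = 6 and I_e = 1, the predicted 0.45 s period sits on the plateau (Cs = 0.1, V = 1000 kN), while a softer 1.0 s system would be governed by the cap (Cs = 0.05, V = 500 kN), which is why the slab system's stiffness contribution matters.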
Procedia PDF Downloads 191
505 Comparative Effects of Resveratrol and Energy Restriction on Liver Fat Accumulation and Hepatic Fatty Acid Oxidation
Authors: Iñaki Milton-Laskibar, Leixuri Aguirre, Maria P. Portillo
Abstract:
Introduction: Energy restriction is an effective approach to preventing liver steatosis. However, for social and economic reasons among others, compliance with this treatment protocol is often very poor, especially in the long term. Resveratrol, a natural polyphenolic compound of the stilbene group, has been widely reported to mimic the effects of energy restriction. Objective: To analyze the effects of resveratrol, under normoenergetic feeding conditions and under mild energy restriction, on liver fat accumulation and hepatic fatty acid oxidation. Methods: 36 male six-week-old rats were fed a high-fat, high-sucrose diet for 6 weeks in order to induce steatosis. The rats were then divided into four groups and fed a standard diet for 6 additional weeks: a control group (C), a resveratrol group (RSV; resveratrol 30 mg/kg/d), a restricted group (R; 15% energy restriction) and a combined group (RR; 15% energy restriction and resveratrol 30 mg/kg/d). Liver triacylglycerol (TG) and total cholesterol contents were measured using commercial kits. Carnitine palmitoyltransferase 1a (CPT1a) and citrate synthase (CS) activities were measured spectrophotometrically. TFAM (mitochondrial transcription factor A) and peroxisome proliferator-activated receptor alpha (PPARα) protein contents, as well as the ratio of acetylated peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC1α) to total PGC1α, were analyzed by Western blot. Statistical analysis was performed using one-way ANOVA with the Newman-Keuls post-hoc test. Results: No differences were observed among the four groups in liver weight and cholesterol content, but the three treated groups showed reduced TG compared with the control group, with the restricted groups showing the lowest values (no differences between them). Higher CPT1a and CS activities were observed in the groups supplemented with resveratrol (RSV and RR), with no difference between them.
The acetylated PGC1α/total PGC1α ratio was lower in the treated groups (RSV, R and RR) than in the control group, with no differences among them. As far as TFAM protein expression is concerned, only the RR group reached a higher value. Finally, no changes were observed in PPARα protein expression. Conclusions: Resveratrol administration is an effective intervention for reducing liver triacylglycerol content, but a mild energy restriction is even more effective, and the mechanisms of action of the two strategies are different. Thus resveratrol, but not energy restriction, seems to act by increasing fatty acid oxidation, although mitochondriogenesis does not seem to be induced. When the two treatments (resveratrol administration and a mild energy restriction) were combined, no additive or synergistic effects were observed. Acknowledgements: MINECO-FEDER (AGL2015-65719-R), Basque Government (IT-572-13), University of the Basque Country (ELDUNANOTEK UFI11/32), Institute of Health Carlos III (CIBERobn). Iñaki Milton holds a fellowship from the Basque Government.
Keywords: energy restriction, fat, liver, oxidation, resveratrol
Procedia PDF Downloads 211
504 Enhanced Physiological Response of Blood Pressure and Improved Performance in Successive Divided Attention Test Seen with Classical Instrumental Background Music Compared to Controls
Authors: Shantala Herlekar
Abstract:
Introduction: The entrainment effect of music on cardiovascular parameters is well established. Music is often played in the background by medical students while studying. However, does it really help them relax faster and concentrate better? Objectives: This study was done to compare the effects of classical instrumental background music versus no music on the blood pressure response over time and on a successively performed divided attention test in Indian and Malaysian first-year medical students. Method: 60 Indian and 60 Malaysian first-year medical students, with equal numbers of girls and boys, were randomized into two groups, i.e., a music group and a control group, thus creating four subgroups. Three different forms of the Symbol Digit Modality Test (to test concentration ability) were used: as a pre-test, during the music/control session, and as a post-test. Performance was assessed using total, correct and error scores. Simultaneously, multiple blood pressure recordings were taken: as a pre-test, at 1, 5, 15 and 25 minutes during the music/control (+SDMT) session, and as a post-test. The music group performed the test with classical instrumental background music, while the control group performed it in silence. Results were analyzed using Student's paired t-test; p < 0.05 was taken as statistically significant. A drop in BP was indicative of a relaxed state, and a rise in BP with task performance was indicative of increased arousal. Results: In the Symbol Digit Modality Test (SDMT), the music group showed significantly better results for correct (p = 0.02) and total (p = 0.029) scores during the post-test, while errors were reduced (p = 0.002). The Indian music group showed a decline in post-test error scores (p = 0.002). The Malaysian music group performed significantly better in all categories.
The blood pressure response was similar in the music and control groups, with the following variations: a drop in BP at 5 minutes, significant in the music group (p < 0.001); a steep rise in values up to 15 minutes (corresponding to the SDMT test), also significant only in the music group (p < 0.001); and systolic BP readings in the controls during the post-test at lower levels compared to the music group. On comparing the subgroups, not much difference was noticed in the recordings of the Indian students’ subgroups, while all the paired t-test values in the Malaysian music group were significant. Conclusion: These recordings indicate an increased relaxed state with classical instrumental music and increased arousal while performing a concentration task. The music used in our study was beneficial to students irrespective of their nationality and preference of music type. It can act as an “active coping” strategy and alleviate stress within a very short period of time, in our study within a span of 5 minutes. When used in the background during task performance, it can increase arousal, which helps the students perform better. Implications: Music can be used between lectures for a short time to relax the students and help them concentrate better in subsequent classes, especially in late afternoon sessions.
Keywords: blood pressure, classical instrumental background music, ethnicity, symbol digit modality test
Procedia PDF Downloads 141
503 Food Insecurity and Other Correlates of Individual Components of Metabolic Syndrome in Women Living with HIV (WLWH) in the United States
Authors: E. Wairimu Mwangi, Daniel Sarpong
Abstract:
Background: Access to effective antiretroviral therapy in the United States has led to increased longevity in people living with HIV (PLHIV). Despite this progress, women living with HIV (WLWH) experience increasing rates of cardiometabolic disorders compared with their HIV-negative counterparts. Studies of the predictors of metabolic disorders in this population have largely focused on the composite measure of metabolic syndrome (METs). This study seeks to identify the predictors of the composite and individual METs factors in a nationally representative sample of WLWH. In particular, it examines the role of food security in predicting METs. Methods: The study comprised 1800 women, a subset of participants from the Women’s Interagency HIV Study (WIHS). The primary exposure variable, food security, was measured using the U.S. 10-item Household Food Security Survey Module. The outcome measures are the five metabolic syndrome indicators (elevated blood pressure [systolic BP > 130 mmHg and diastolic BP ≥ 85 mmHg], elevated fasting glucose [≥ 110 mg/dL], elevated fasting triglycerides [≥ 150 mg/dL], reduced HDL cholesterol [< 50 mg/dL], and waist circumference > 88 cm) and the composite measure, metabolic syndrome (METs) status. Each metabolic syndrome indicator was coded 1 if present and 0 otherwise. The values of the five indicators were summed, and participants with a total score of 3 or greater were classified as having metabolic syndrome; for analysis, these participants were assigned a code of 1 and 0 otherwise. The covariates accounted for in this study fell into sociodemographic factors and behavioral and health characteristics. Results: The participants' mean (SD) age was 47.1 (9.1) years, with 71.4% Black and 10.9% White. About a third (33.1%) had less than a high school (HS) diploma, 60.4% were married, 32.8% were employed, and 53.7% were low-income.
The prevalences of worst dietary diversity and of low, moderate, and high food security were 24.1%, 26.6%, 17.0%, and 56.4%, respectively. The correlate profiles of the five individual METs factors plus the composite measure of METs differ significantly, with METs based on HDL having the most correlates (age, education, drinking status, low income, body mass index, and health perception). Additionally, metabolic syndrome based on waist circumference was the only metabolic factor with which food security was significantly correlated (food security, age, and body mass index). Age was a significant predictor of all five individual METs factors plus the composite METs measure. Except for METs based on fasting triglycerides, body mass index (BMI) was a significant correlate of the various measures of metabolic syndrome. Conclusion: High-density lipoprotein (HDL) cholesterol significantly correlated with most predictors. BMI was a significant predictor of all METs factors except fasting triglycerides. Food insecurity, the primary predictor, was only significantly associated with waist circumference.
Keywords: blood pressure, food insecurity, fasting glucose, fasting triglyceride, high-density lipoprotein, metabolic syndrome, waist circumference, women living with HIV
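The scoring rule described in the Methods can be sketched directly; the thresholds are the ones quoted above, and the example values are hypothetical:

```python
def mets_score(sbp, dbp, glucose, tg, hdl, waist):
    """Score the five metabolic syndrome indicators as described above:
    each indicator coded 1 if present, 0 otherwise; METs if total >= 3."""
    indicators = [
        int(sbp > 130 and dbp >= 85),  # elevated blood pressure (mmHg)
        int(glucose >= 110),           # elevated fasting glucose (mg/dL)
        int(tg >= 150),                # elevated fasting triglycerides (mg/dL)
        int(hdl < 50),                 # reduced HDL cholesterol (mg/dL)
        int(waist > 88),               # waist circumference (cm)
    ]
    total = sum(indicators)
    return total, int(total >= 3)

# Hypothetical participant meeting all five criteria
score, has_mets = mets_score(sbp=135, dbp=90, glucose=115, tg=160, hdl=45, waist=92)
```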
Procedia PDF Downloads 58
502 Arc Plasma Application for Solid Waste Processing
Authors: Vladimir Messerle, Alfred Mosse, Alexandr Ustimenko, Oleg Lavrichshev
Abstract:
Hygienic and sanitary studies of typical medical-biological waste conducted in Kazakhstan, Russia, Belarus and other countries show that its risk to the environment is much higher than that of most chemical wastes. For example, the toxicity of solid waste (SW) containing cytotoxic drugs and antibiotics is comparable to that of radioactive waste of high and medium level activity. This report presents the results of the thermodynamic analysis of thermal processing of SW and of experiments at the plasma unit developed for SW processing. Thermodynamic calculations showed that the maximum yield of synthesis gas at plasma gasification of SW in air and steam media is achieved at a temperature of 1600 K. By air plasma gasification of SW, high-calorific synthesis gas with a concentration of 82.4% (CO – 31.7%, H2 – 50.7%) can be obtained, and by steam plasma gasification, a concentration of 94.5% (CO – 33.6%, H2 – 60.9%). The specific heat of combustion of the synthesis gas produced by air gasification amounts to 14267 kJ/kg, while by steam gasification it is 19414 kJ/kg. At the optimal temperature (1600 K), the specific power consumption for air gasification of SW is 1.92 kWh/kg, while for steam gasification it is 2.44 kWh/kg. An experimental study was carried out in a plasma reactor, a device of periodic (batch) action. An arc plasma torch of 70 kW electric power is used for SW processing. The SW feed rate was 30 kg/h, and the flow of plasma-forming air was 12 kg/h. Under the influence of the air plasma flame, the mass-average temperature in the chamber reaches 1800 K. Gaseous products are taken out of the reactor into the flue gas cooling unit, and the condensed products accumulate in the slag formation zone. The cooled gaseous products enter the gas purification unit, after which the gas is supplied to the analyzer via a gas sampling system. The ventilation system provides a negative pressure in the reactor of up to 10 mm of water column.
Condensed products of SW processing are removed from the reactor after it is stopped. From the experiments on SW plasma gasification, the reactor operating conditions were determined, the exhaust gas analysis was performed, and the residual carbon content in the slag was determined. Gas analysis showed the following composition of the gas at the exit of the gas purification unit (vol.%): CO – 26.5, H2 – 44.6, N2 – 28.9. The total concentration of the syngas was 71.1%, which agreed well with the thermodynamic calculations: the discrepancy between experiment and calculation in the yield of the target syngas did not exceed 16%. The specific power consumption for SW gasification in the plasma reactor, according to the experimental results, amounted to 2.25 kWh/kg of working substance. No harmful impurities were found in either the gaseous or the condensed products of SW plasma gasification. Comparison of experimental results and calculations showed good agreement. Acknowledgement: This work was supported by the Ministry of Education and Science of the Republic of Kazakhstan and the Ministry of Education and Science of the Russian Federation (Agreement on grant No. 14.607.21.0118, project RFMEF160715X0118).
Keywords: coal, efficiency, ignition, numerical modeling, plasma-fuel system, plasma generator
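As a quick arithmetic check of the reported agreement, the experimental syngas concentration and its discrepancy from the thermodynamic prediction can be recomputed from the figures quoted above:

```python
# Values reported in the text
calc_syngas = 82.4           # vol.% syngas predicted for air plasma gasification
co_exp, h2_exp = 26.5, 44.6  # vol.% CO and H2 measured at the purification unit exit

exp_syngas = co_exp + h2_exp  # total experimental syngas concentration, 71.1 vol.%

# Relative discrepancy between calculation and experiment,
# consistent with the stated bound of 16%
discrepancy = (calc_syngas - exp_syngas) / calc_syngas * 100
```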
Procedia PDF Downloads 250
501 Treatment with Triton-X 100: An Enhancement Approach for Cardboard Bioprocessing
Authors: Ahlam Said Al Azkawi, Nallusamy Sivakumar, Saif Nasser Al Bahri
Abstract:
Diverse approaches and pathways are under development with the aim of eventually producing cellulosic biofuels and other bio-products at commercial scale in “bio-refineries”; however, the key challenge is the high complexity of processing the feedstock, which is complicated and energy-consuming. To overcome the complications of utilizing naturally occurring lignocellulosic biomass, using waste paper as a feedstock for bio-production may solve the problem. Besides being abundant and cheap, bioprocessing of waste paper has evolved in response to public concern over rising landfill costs caused by shrinking landfill capacity. Cardboard (CB) is one of the major components of municipal solid waste and one of the most important items to recycle. Although 50-70% of cardboard is known to consist of cellulose and hemicellulose, the lignin surrounding them causes hydrophobic cross-linking, which physically obstructs hydrolysis by rendering the carbohydrates resistant to enzymatic cleavage. Therefore, pretreatment is required to disrupt this resistance and to enhance the exposure of the targeted carbohydrates to the hydrolytic enzymes. Several pretreatment approaches have been explored; the best are those that improve cellulose conversion rates and hydrolytic enzyme performance with minimal cost and downstream processing. One of the promising strategies in this field is the application of surfactants, especially non-ionic surfactants. In this study, Triton-X 100 was used as a surfactant to treat cardboard prior to enzymatic hydrolysis, and the treatment was compared with acid treatment using 0.1% H2SO4. The effect of the surfactant enhancement was evaluated through its effect on the hydrolysis rate with respect to time, in addition to evaluating the structural changes and modifications by scanning electron microscopy (SEM) and X-ray diffraction (XRD), and through compositional analysis.
Further work was performed to produce ethanol from CB treated with Triton-X 100 via separate hydrolysis and fermentation (SHF) and simultaneous saccharification and fermentation (SSF). The hydrolysis studies demonstrated an enhancement in saccharification of 35%. After 72 h of hydrolysis, a saccharification rate of 98% was achieved from CB enhanced with Triton-X 100, while only 89% was achieved from acid-pretreated CB. At 120 h, the apparent saccharification exceeded 100% as reducing sugars continued to increase with time. This enhancement was not supported by any significant change in the cardboard composition, as the cellulose, hemicellulose and lignin contents remained the same after treatment, but obvious structural changes were observed in SEM images. The cellulose fibers were clearly exposed, with much less debris and fewer deposits compared to cardboard without Triton-X 100. The XRD pattern also revealed the ability of the surfactant to remove calcium carbonate, a filler found in waste paper known to have a negative effect on enzymatic hydrolysis. The cellulose crystallinity was 73.18% without surfactant and was reduced to 66.68%, rendering the substrate more amorphous and susceptible to enzymatic attack. Triton-X 100 proved to effectively enhance CB hydrolysis and ultimately had a positive effect on the ethanol yield via SSF. Treating cardboard with Triton-X 100 alone was sufficient to enhance enzymatic hydrolysis and ethanol production.
Keywords: cardboard, enhancement, ethanol, hydrolysis, treatment, Triton-X 100
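Cellulose crystallinity values of the kind quoted above are commonly computed from XRD patterns with the Segal method; that the authors used this exact method is an assumption, and the peak intensities below are hypothetical, chosen only to reproduce the reported percentages:

```python
def segal_crystallinity(i_002, i_am):
    """Segal crystallinity index (%) from XRD intensities:
    i_002 - intensity of the (002) crystalline peak (near 22.5 deg 2-theta)
    i_am  - amorphous scatter intensity (near 18 deg 2-theta)."""
    return (i_002 - i_am) / i_002 * 100

# Hypothetical intensities chosen to reproduce the reported values
cri_untreated = segal_crystallinity(1000.0, 268.2)  # cardboard without surfactant
cri_treated = segal_crystallinity(1000.0, 333.2)    # after Triton-X 100 treatment
```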
Procedia PDF Downloads 152
500 An Unified Model for Longshore Sediment Transport Rate Estimation
Authors: Aleksandra Dudkowska, Gabriela Gic-Grusza
Abstract:
Wind wave-induced sediment transport is an important multidimensional and multiscale dynamic process affecting coastal seabed changes and coastline evolution. Knowledge of the sediment transport rate is important for solving many environmental and geotechnical issues. There are many types of sediment transport models, but none of them is widely accepted, because the process is not fully understood. Another problem is a lack of sufficient measurement data to verify proposed hypotheses. There are different types of models for longshore sediment transport (LST, which is discussed in this work) and cross-shore transport, related to the different time and space scales of the processes, and there are models describing bed-load transport (discussed in this work), suspended and total sediment transport. LST models use, among others, information about (i) the flow velocity near the bottom, which in the case of wave-current interaction in the coastal zone is a separate problem, and (ii) the critical bed shear stress, which strongly depends on the type of sediment and becomes complicated in the case of heterogeneous sediment. Moreover, the LST rate is strongly dependent on the local environmental conditions. To organize existing knowledge, a series of sediment transport model intercomparisons was carried out as part of the project “Development of a predictive model of morphodynamic changes in the coastal zone”. Four classical one-grid-point models were studied and intercompared over a wide range of bottom shear stress conditions, corresponding to wind-wave conditions appropriate for the coastal zone in Polish marine areas. The set of models comprises classical theories that assume a simplified influence of turbulence on the sediment transport (Du Boys, Meyer-Peter & Müller, Ribberink, Engelund & Hansen). It turned out that the values of estimated longshore instantaneous mass sediment transport are in general in agreement with earlier studies and measurements conducted in the area of interest.
However, none of the formulas really stands out from the rest as being particularly suitable for the test location over the whole analyzed flow velocity range. Therefore, based on the models discussed, a new unified formula for longshore sediment transport rate estimation is introduced, which constitutes the main original result of this study. The sediment transport rate is calculated from the bed shear stress and the critical bed shear stress. The dependence on environmental conditions is expressed by one coefficient (in the form of a constant or a function); thus the model presented can be quite easily adjusted to local conditions. The importance of each model parameter for specific velocity ranges is discussed. Moreover, it is shown that the near-bottom flow velocity is the main determinant of longshore bed-load in storm conditions. Thus, the accuracy of the results depends less on the sediment transport model itself and more on the appropriate modeling of the near-bottom velocities.
Keywords: bedload transport, longshore sediment transport, sediment transport models, coastal zone
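For illustration, one of the four classical formulas intercompared above, Meyer-Peter & Müller, relates the bed-load rate to the excess of the Shields parameter over its critical value. A minimal sketch, with illustrative values for quartz sand in water (the input stresses and grain size are not from the study):

```python
import math

def mpm_bedload(tau, tau_cr, d50, rho_s=2650.0, rho=1000.0, g=9.81):
    """Meyer-Peter & Mueller bed-load transport rate per unit width (m^2/s)
    from bed shear stress tau and critical bed shear stress tau_cr (Pa),
    for grains of median diameter d50 (m)."""
    s = rho_s / rho                       # sediment specific gravity
    denom = (rho_s - rho) * g * d50
    theta = tau / denom                   # Shields parameter
    theta_cr = tau_cr / denom             # critical Shields parameter
    if theta <= theta_cr:
        return 0.0                        # no transport below the threshold
    phi = 8.0 * (theta - theta_cr) ** 1.5  # dimensionless transport rate
    return phi * math.sqrt((s - 1.0) * g * d50 ** 3)

q_b = mpm_bedload(tau=2.0, tau_cr=0.2, d50=0.0002)  # 0.2 mm sand, storm-like stress
```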
Procedia PDF Downloads 387
499 Quantitative Analysis of Traffic Dynamics and Violation Patterns Triggered by Cruise Ship Tourism in Victoria, British Columbia
Authors: Muhammad Qasim, Laura Minet
Abstract:
Victoria (BC), Canada, is a major cruise ship destination, attracting over 600,000 tourists annually. Residents of the James Bay neighborhood, home to the Ogden Point cruise terminal, have expressed concerns about the impacts of cruise ship activity on local traffic, air pollution, and safety compliance. This study evaluates the effects of cruise ship-induced traffic in James Bay, focusing on traffic flow intensification, density surges, changes in traffic mix, and speeding violations. To achieve these objectives, traffic data was collected in James Bay during two key periods: May, before the peak cruise season, and August, during full cruise operations. Three Miovision cameras captured the vehicular traffic mix at strategic entry points, while nine traffic counters monitored traffic distribution and speeding violations across the network. Traffic data indicated an average volume of 308 vehicles per hour during peak cruise times in May, compared to 116 vehicles per hour when no ships were in port. Preliminary analyses revealed a significant intensification of traffic flow during cruise ship "hoteling hours," with a volume increase of approximately 10% per cruise ship arrival. A notable 86% surge in taxi presence was observed on days with three cruise ships in port, indicating a substantial shift in traffic composition, particularly near the cruise terminal. The number of tourist buses escalated from zero in May to 32 in August, significantly altering traffic dynamics within the neighborhood. The period between 8 pm and 11 pm saw the most significant increases in traffic volume, especially when three ships were docked. Higher vehicle volumes were associated with a rise in speed violations, although this pattern was inconsistent across all areas. Speeding violations were more frequent on roads with lower traffic density, while roads with higher traffic density experienced fewer violations, due to reduced opportunities for speeding in congested conditions. 
PTV VISUM software was utilized for fuzzy distribution analysis and to visualize traffic distribution across the study area, including an assessment of the Level of Service on major roads during periods before and during the cruise ship season. This analysis identified the areas most affected by cruise ship-induced traffic, providing a detailed understanding of the impact on specific parts of the transportation network. These findings underscore the significant influence of cruise ship activity on traffic dynamics in Victoria, BC, particularly during peak periods when multiple ships are in port. The study highlights the need for targeted traffic management strategies to mitigate the adverse effects of increased traffic flow, changes in traffic mix, and speed violations, thereby enhancing road safety in the James Bay neighborhood. Further research will focus on detailed emissions estimation to fully understand the environmental impacts of cruise ship activity in Victoria.
Keywords: cruise ship tourism, air quality, traffic violations, transport dynamics, pollution
Procedia PDF Downloads 22
498 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation would lead to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of the concrete cube consists of a series of two-dimensional slices. In total, 49 slices are obtained from a 150 mm cube, at an interval of approximately 3 mm. Because CT scanning is non-destructive, the same cube can be scanned and later subjected to a compression test in a universal testing machine (UTM) to find its strength. The image processing and the extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. The digital colour image consists of red, green and blue (RGB) pixels. The RGB image is converted to a black and white (BW) image, and the mesoscale constituents are identified by assigning values between 0 and 255. The pixel matrix is created for modeling of mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale considering the relative strength: zero is assigned to voids, 4-6 to mortar and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given.
Properties of materials, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated by ABAQUS after importing the input file. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids and the variation of the ITZ layer thickness on load carrying capacity, stress-strain response and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete.
Keywords: concrete, image processing, plane strain, interfacial transition zone
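The pixel normalization and phase identification described above can be sketched as follows; the mapping from grayscale to the 0-9 scale is assumed to be a simple linear rescaling, not necessarily the study's exact implementation:

```python
def classify_pixel(gray):
    """Map an 8-bit grayscale value (0-255) to the 0-9 relative-strength
    scale used for the mesoscale model: 0 = void, 1-3 = aggregate/mortar
    boundary (ITZ), 4-6 = mortar, 7-9 = aggregate.
    The linear rescaling here is illustrative, not the study's calibration."""
    norm = round(gray / 255 * 9)  # normalize 0-255 down to the 0-9 scale
    if norm == 0:
        return "void"
    if norm <= 3:
        return "ITZ"
    if norm <= 6:
        return "mortar"
    return "aggregate"

# A dark void pixel, a boundary pixel, a mortar pixel and a bright aggregate pixel
phases = [classify_pixel(g) for g in (5, 80, 140, 230)]
```

Applied to every pixel of a slice, this yields the phase matrix from which the triangular or quadrilateral finite elements are generated.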
Procedia PDF Downloads 240
497 An Investigation of Tetraspanin Proteins’ Role in UPEC Infection
Authors: Fawzyah Albaldi
Abstract:
Urinary tract infections (UTIs) are the most prevalent of infectious diseases, and > 80% are caused by uropathogenic E. coli (UPEC). Infection occurs following adhesion to urothelial plaques on bladder epithelial cells, whose major protein constituents are the uroplakins (UPs). Two of the four uroplakins (UPIa and UPIb) are members of the tetraspanin superfamily. The UPEC adhesin FimH is known to interact directly with UPIa. Tetraspanins are a diverse family of transmembrane proteins that generally act as “molecular organizers” by binding different proteins and lipids to form tetraspanin-enriched microdomains (TEMs). Previous work by our group has shown that TEMs are involved in the adhesion of many pathogenic bacteria to human cells. Adhesion can be blocked by tetraspanin-derived synthetic peptides, suggesting that tetraspanins may be valuable drug targets. In this study, we investigated the role of tetraspanins in UPEC adherence to bladder epithelial cells. Human bladder cancer cell lines (T24, 5637, RT4), commonly used as in vitro models to investigate UPEC infection, along with primary human bladder cells, were used in this project. The aim was to establish a model for UPEC adhesion/infection with the objective of evaluating the impact of tetraspanin-derived reagents on this process. Such reagents could reduce the progression of UTI, particularly in patients with indwelling catheters. Tetraspanin expression on the bladder cells was investigated by qPCR and flow cytometry, with CD9 and CD81 generally highly expressed. Interestingly, despite these cell lines being used by other groups to investigate FimH antagonists, the uroplakin proteins (UPIa, UPIb and UPIII) were poorly expressed at the cell surface, although some were present intracellularly. Attempts were made to differentiate the cell lines to induce cell surface expression of these UPs, but these were largely unsuccessful.
Pre-treatment of bladder epithelial cells with an anti-CD9 monoclonal antibody significantly decreased UPEC infection, whilst anti-CD81 had no effect. A short (15 aa) synthetic peptide corresponding to the large extracellular region (EC2) of CD9 also significantly reduced UPEC adherence. Furthermore, we demonstrated specific binding of the fluorescently tagged peptide to the cells. CD9 is known to associate with a number of heparan sulphate proteoglycans (HSPGs) that have also been implicated in bacterial adhesion. Here, we demonstrated that unfractionated heparin (UFH) and heparin analogs significantly inhibited UPEC adhesion to RT4 cells, as did pre-treatment of the cells with heparinases. Pre-treatment with chondroitin sulphate (CS) and chondroitinase also significantly decreased UPEC adherence to RT4 cells. This study may shed light on a common pathogenicity mechanism involving the organisation of HSPGs by tetraspanins. In summary, although we determined that the bladder cell lines were not suitable for investigating the role of uroplakins in UPEC adhesion, we demonstrated roles for CD9 and cell surface proteoglycans in this interaction. Agents that target these may be useful in treating/preventing UTIs.
Keywords: UTIs, tspan, uroplakins, CD9
Procedia PDF Downloads 103
496 Removal of Heavy Metals by Ultrafiltration Assisted with Chitosan or Carboxy-Methyl Cellulose
Authors: Boukary Lam, Sebastien Deon, Patrick Fievet, Nadia Crini, Gregorio Crini
Abstract:
The treatment of heavy metal-contaminated industrial wastewater has become a major challenge over the last decades. Conventional processes for the treatment of metal-containing effluents do not always simultaneously satisfy both legislative and economic criteria. In this context, coupling of processes can be a promising alternative to the conventional approaches used by industry. The polymer-assisted ultrafiltration (PAUF) process is one of these coupling processes. Its principle is based on a sequence of steps: a reaction (e.g., complexation) between metal ions and a polymer, followed by the rejection of the formed species by a UF membrane. Unlike free ions, which can cross the UF membrane due to their small size, the polymer/ion species, whose size is larger than the pore size, are rejected. The PAUF process was investigated in depth herein for the removal of nickel ions by adding chitosan or carboxymethyl cellulose (CMC). Experiments were conducted with synthetic solutions containing 1 to 100 ppm of nickel ions with or without the presence of NaCl (0.05 to 0.2 M), and with an industrial discharge water (containing several metal ions) with and without polymer. Chitosan with a molecular weight of 1.8×10⁵ g mol⁻¹ and a degree of acetylation close to 15% was used. CMC with a degree of substitution of 0.7 and a molecular weight of 9×10⁵ g mol⁻¹ was employed. Filtration experiments were performed under cross-flow conditions in a filtration cell equipped with a polyamide thin film composite flat-sheet membrane (3.5 kDa). Without the polymer addition step, it was found that nickel rejection decreases from 80 to 0% with increasing metal ion concentration and salt concentration. This behavior agrees qualitatively with the Donnan exclusion principle: the increase in electrolyte concentration screens the electrostatic interaction between the ions and the membrane fixed charge, which decreases their rejection.
It was shown that the addition of a sufficient amount of polymer (greater than 10⁻² M of monomer units) can offset this decrease and allow good metal removal. However, the permeation flux was found to be somewhat reduced due to the increase in osmotic pressure and viscosity. It was also highlighted that an increase in pH (from 3 to 9) has a strong influence on removal performance: the higher the pH, the better the removal performance. The two polymers showed similar performance enhancement at natural pH. However, chitosan proved more efficient in slightly basic conditions (above its pKa), whereas CMC demonstrated very weak rejection performance at pH below its pKa. In terms of metal rejection, chitosan is thus probably the better option for basic or strongly acidic (pH < 4) conditions. Nevertheless, CMC should probably be preferred to chitosan in natural conditions (5 < pH < 8), since its impact on the permeation flux is less significant. Finally, ultrafiltration of an industrial discharge water showed that the increase in metal ion rejection induced by the polymer addition is very low, due to competition between the various ions present in the complex mixture.
Keywords: carboxymethyl cellulose, chitosan, heavy metals, nickel ion, polymer-assisted ultrafiltration
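Rejection figures of the kind quoted above are conventionally expressed through the observed rejection coefficient R = 1 - Cp/Cf; a minimal sketch with hypothetical Ni(II) concentrations (not values from the study):

```python
def observed_rejection(c_permeate, c_feed):
    """Observed rejection R = 1 - Cp/Cf: R = 1 means the metal ion is fully
    retained by the membrane, R = 0 means it passes through freely."""
    return 1.0 - c_permeate / c_feed

# Hypothetical Ni(II) concentrations (ppm), without and with polymer addition
r_no_polymer = observed_rejection(c_permeate=8.0, c_feed=10.0)     # most Ni passes
r_with_chitosan = observed_rejection(c_permeate=0.5, c_feed=10.0)  # polymer/Ni complex retained
```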
Procedia PDF Downloads 163
495 Comparison of Cu Nanoparticle Formation and Properties with and without Surrounding Dielectric
Authors: P. Dubcek, B. Pivac, J. Dasovic, V. Janicki, S. Bernstorff
Abstract:
When grown only to nanometric sizes, metallic particles (e.g., Ag, Au and Cu) exhibit specific optical properties caused by the presence of a plasmon band. The plasmon band represents a collective oscillation of the conduction electrons and causes a narrow-band absorption of light in the visible range. When the nanoparticles are embedded in a dielectric, they also modify the dielectric's optical properties, which can be fine-tuned by tuning the particle size. We investigated Cu nanoparticle growth with and without a surrounding dielectric (SiO2 capping layer). The morphology and crystallinity were investigated by GISAXS and GIWAXS, respectively. Samples were produced by high vacuum thermal evaporation of Cu onto a monocrystalline silicon substrate held at room temperature, 100°C or 180°C. One series was capped in situ with a 10 nm SiO2 layer. Additionally, samples were annealed at different temperatures up to 550°C, also in high vacuum. The room-temperature-deposited samples annealed at lower temperatures exhibit a continuous film structure: strong oscillations in the GISAXS intensity are present, especially in the capped samples. At higher temperatures, enhanced surface dewetting and Cu nanoparticle (nanoisland) formation partially destroy the flatness of the interface. Therefore, the particle type of scattering is enhanced, while the film fringes are depleted. However, the capping layer hinders particle formation, and the continuous film structure is preserved up to higher annealing temperatures (visible as strong and persistent fringes in GISAXS) compared to the non-capped samples. According to GISAXS, lateral particle sizes are reduced at higher temperatures, while particle height increases. This is ascribed to close packing of the particles formed at lower temperatures, so the GISAXS-deduced sizes are partially the result of particle agglomerate dimensions.
Lateral maxima in GISAXS indicate good positional correlation, and the particle-to-particle distance increases as the particles grow with temperature elevation. This coordination is much stronger in the capped and lower-temperature-deposited samples. Dewetting is much more vigorous in the non-capped sample, and since nanoparticles are formed in a range of sizes, the correlation recedes with both deposition and annealing temperature. Surface topology was checked by atomic force microscopy (AFM). The capped samples' surfaces were smoother, and the lateral sizes of the surface features were larger, compared to the non-capped samples. Altogether, the AFM results suggest somewhat larger particles and a wider size distribution, which can be attributed to the difference in probe size. Finally, the plasmonic effect was monitored by UV-Vis reflectance spectroscopy; the relatively weak plasmonic effect can be explained by incomplete dewetting or partial interconnection of the formed particles. Keywords: copper, GISAXS, nanoparticles, plasmonics
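The particle-to-particle distance inferred from the GISAXS lateral maxima follows the standard first-order relation between the correlation-peak position and the mean spacing, d ≈ 2π/q_peak. A minimal sketch, with purely illustrative peak positions (none taken from the study):

```python
import math

def spacing_from_gisaxs_peak(q_peak_inv_nm: float) -> float:
    """Estimate the mean center-to-center particle distance (nm) from the
    lateral correlation-peak position q_peak (nm^-1) in a GISAXS pattern,
    using the common first-order approximation d ~ 2*pi / q_peak."""
    return 2.0 * math.pi / q_peak_inv_nm

# Illustrative values only: as particles grow on annealing, the
# correlation peak moves to smaller q, i.e. larger spacing.
for q in (0.8, 0.5, 0.3):  # nm^-1
    print(f"q_peak = {q:.1f} nm^-1  ->  d ~ {spacing_from_gisaxs_peak(q):.1f} nm")
```

This captures why a peak shifting to lower q with temperature elevation reads as an increasing inter-particle distance.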
Procedia PDF Downloads 123494 Enhancing Project Management Performance in Prefabricated Building Construction under Uncertainty: A Comprehensive Approach
Authors: Niyongabo Elyse
Abstract:
Prefabricated building construction is a pioneering approach that combines design, production, and assembly to attain energy efficiency, environmental sustainability, and economic feasibility. Despite continuous development of the industry in China, the low technical maturity of standardized design, factory production, and construction assembly introduces uncertainties affecting prefabricated component production and on-site assembly processes. This research focuses on enhancing project management performance under uncertainty to help enterprises navigate these challenges and optimize project resources. The study introduces a perspective on how uncertain factors influence the implementation of prefabricated building construction projects. It proposes a theoretical model considering project process management ability, adaptability to uncertain environments, and the collaboration ability of project participants. The impact of uncertain factors is demonstrated through case studies and quantitative analysis, revealing constraints on implementation time, cost, quality, and safety. To address uncertainties in prefabricated component production scheduling, a fuzzy model is presented that expresses processing times as interval values. The model utilizes a cooperative co-evolutionary algorithm (CCEA) to optimize scheduling, demonstrated through a real case study showcasing reduced project duration and minimized effects of processing-time disturbances. Additionally, the research addresses on-site assembly construction scheduling, considering the relationship between task processing times and assigned resources. A multi-objective model with fuzzy activity durations is proposed, employing a hybrid cooperative co-evolutionary algorithm (HCCEA) to optimize project scheduling. Results from real case studies indicate improved project performance in terms of duration, cost, and resilience to processing-time delays and resource changes. 
The study also introduces a multistage dynamic process control model, utilizing IoT technology for real-time monitoring during component production and construction assembly. This approach dynamically adjusts schedules when constraints arise, leading to enhanced project management performance, as demonstrated in a real prefabricated housing project. Key contributions include a fuzzy prefabricated-component production scheduling model, a multi-objective multi-mode resource-constrained construction project scheduling model with fuzzy activity durations, a multistage dynamic process control model, and a cooperative co-evolutionary algorithm. The integrated mathematical model addresses the complexity of prefabricated building construction project management, providing a theoretical foundation for practical decision-making in the field. Keywords: prefabricated construction, project management performance, uncertainty, fuzzy scheduling
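As a rough illustration of the interval representation of processing times in the fuzzy scheduling model: when components pass through a production line in series, an interval-valued makespan can be propagated by summing the optimistic and pessimistic bounds. The job data below are hypothetical, and a CCEA/HCCEA would search over sequences and resource assignments rather than evaluate a single plan:

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # (optimistic, pessimistic) processing time, hours

def makespan_interval(jobs: List[Interval]) -> Interval:
    """For components processed in series on a single line, the makespan
    under interval uncertainty is itself an interval: the sum of lower
    bounds up to the sum of upper bounds."""
    lo = sum(t[0] for t in jobs)
    hi = sum(t[1] for t in jobs)
    return (lo, hi)

# Hypothetical prefabricated-component processing times (not from the paper).
jobs = [(2.0, 3.0), (4.0, 5.5), (1.5, 2.0)]
print(makespan_interval(jobs))  # -> (7.5, 10.5)
```

An optimizer would then rank candidate schedules by, for example, the interval midpoint or its robustness to processing-time disturbances.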
Procedia PDF Downloads 50493 Coastal Modelling Studies for Jumeirah First Beach Stabilization
Authors: Zongyan Yang, Gagan K. Jena, Sankar B. Karanam, Noora M. A. Hokal
Abstract:
Jumeirah First beach, a segment of coastline 1.5 km in length, is one of the popular public beaches in Dubai, UAE. The stability of the beach has been affected by several coastal development projects, including The World, Island 2 and La Mer. A comprehensive stabilization scheme comprising two composite groynes (of lengths 90 m and 125 m), modification of the northern breakwater of Jumeirah Fishing Harbour and beach re-nourishment was implemented by Dubai Municipality in 2012. However, the performance of the implemented stabilization scheme has been compromised by the La Mer project (built in 2016), which modified the wave climate at Jumeirah First beach. The objective of the coastal modelling studies is to establish the design basis for further beach stabilization scheme(s). Comprehensive coastal modelling studies were conducted to establish the nearshore wave climate, equilibrium beach orientations and stable beach plan forms. Based on the outcomes of the modelling studies, a recommendation was made to extend the composite groynes to stabilize Jumeirah First beach. Wave transformation was performed following an interpolation approach, with wave transformation matrices derived from simulations of the possible range of wave conditions in the region. The Dubai coastal wave model was developed with MIKE21 SW. The offshore wave conditions were determined from PERGOS wave data at 4 offshore locations, with consideration of the spatial variation. The lateral boundary conditions corresponding to the offshore conditions, at the Dubai/Abu Dhabi and Dubai/Sharjah borders, were derived with application of the LitDrift 1D wave transformation module. The Dubai coastal wave model was calibrated with wave records at monitoring stations operated by Dubai Municipality. The wave transformation matrix approach was validated against nearshore wave measurements at a Dubai Municipality monitoring station in the vicinity of Jumeirah First beach. 
A typical one-year wave time series was transformed to 7 locations in front of the beach to account for the variation in wave conditions, which are affected by adjacent and offshore developments. Equilibrium beach orientations were estimated with application of LitDrift, by finding the beach orientations with null annual littoral transport at the 7 selected locations. The littoral transport calculation results were compared with beach erosion/accretion quantities estimated from the beach monitoring program (twice a year, including bathymetric and topographical surveys). An innovative integral method was developed to outline the stable beach plan forms from the estimated equilibrium beach orientations, with a predetermined minimum beach width. The optimal lengths for the composite groyne extensions were recommended based on the stable beach plan forms. Keywords: composite groyne, equilibrium beach orientation, stable beach plan form, wave transformation matrix
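The null-transport criterion for an equilibrium orientation can be sketched as a one-dimensional root-finding problem: given a function returning the net annual littoral transport for a trial shoreline orientation (in practice computed by LitDrift; here a toy sinusoid with an assumed 25° equilibrium), bisection finds the orientation at which the annual transport vanishes:

```python
import math

def annual_net_transport(orientation_deg: float) -> float:
    """Toy stand-in for a LitDrift computation: net annual littoral
    transport (m^3/yr) as a function of shoreline orientation; positive
    means net drift in one alongshore direction. The sinusoidal form and
    the 25-degree equilibrium are illustrative assumptions, not model output."""
    return 1.0e4 * math.sin(math.radians(orientation_deg - 25.0))

def equilibrium_orientation(lo: float, hi: float, tol: float = 1e-3) -> float:
    """Bisection for the orientation with null net annual transport."""
    assert annual_net_transport(lo) * annual_net_transport(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if annual_net_transport(lo) * annual_net_transport(mid) <= 0:
            hi = mid  # sign change in the lower half
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(round(equilibrium_orientation(0.0, 90.0), 2))  # -> 25.0
```

Repeating this at each of the 7 locations yields the set of equilibrium orientations from which the stable plan forms are outlined.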
Procedia PDF Downloads 263492 Intermodal Strategies for Redistribution of Agrifood Products in the EU: The Case of Vegetable Supply Chain from Southeast of Spain
Authors: Juan C. Pérez-Mesa, Emilio Galdeano-Gómez, Jerónimo De Burgos-Jiménez, José F. Bienvenido-Bárcena, José F. Jiménez-Guerrero
Abstract:
Environmental costs and road congestion resulting from product distribution in Europe have led to the creation of various programs and studies seeking to reduce these negative impacts. In this regard, apart from other institutions, the European Commission (EC) has in recent years designed plans promoting a more sustainable transportation model, in an attempt to ultimately shift traffic from road to sea by using intermodality to achieve a rebalanced model. This issue proves especially relevant in supply chains from peripheral areas of the continent, where the supply of certain agrifood products is high. In such cases, the most difficult challenge is managing perishable goods. This study focuses on new approaches that strengthen the modal shift, as well as the reduction of externalities. The problem is analyzed by attempting to promote an intermodal system (truck and short sea shipping) for transport, taking as a point of reference highly perishable products (vegetables) exported from southeast Spain, which is the leading supplier to Europe. Methodologically, this paper seeks to contribute to the literature by proposing a different and complementary approach to comparing intermodal transport with the road-only alternative. For this purpose, multicriteria decision analysis is utilized in a p-median model (P-M) adapted to the transport of perishables and to a mode-of-shipping selection problem, which must consider different variables: transit cost (including externalities), time, and frequency (including agile response time). This scheme avoids bias in decision-making processes. The results show that the influence of externalities as drivers of the modal shift is reduced when transit time is introduced as a decision variable. These findings confirm that general strategies such as the EC's, based on environmental benefits, lose their capacity for implementation when applied to complex circumstances. 
In general, the different estimations reveal that, in the case of perishables, intermodality would be a secondary and viable option only for very specific destinations (for example, Hamburg and nearby locations, the area of influence of London, Paris, and the Netherlands). Based on this framework, the general outlook on this subject should be modified. Perhaps governments should promote specific business strategies based on new trends in the supply chain, not only on the reduction of externalities, and find new approaches that strengthen the modal shift. A possible option is to redefine ports, conceptualizing them as digitalized redistribution and coordination centers and not only as areas of cargo exchange. Keywords: environmental externalities, intermodal transport, perishable food, transit time
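A stylized sketch of the multicriteria mode-choice step: each alternative is scored on normalized transit cost (externalities included), time, and frequency, and the lowest weighted score wins. All figures and weights are invented for illustration; with transit time weighted heavily, the road-only option can prevail even when intermodal transport is cheaper once externalities are priced in, mirroring the paper's finding:

```python
# Hypothetical figures for one origin-destination pair (not the paper's data).
alternatives = {
    "road_only":  {"cost": 2400.0, "time_h": 48.0, "freq_per_week": 7},
    "intermodal": {"cost": 2100.0, "time_h": 96.0, "freq_per_week": 3},
}
weights = {"cost": 0.4, "time_h": 0.4, "freq_per_week": 0.2}

def score(name: str) -> float:
    """Weighted sum of min-max normalized criteria; lower is better.
    Cost and time are minimized; frequency is a benefit, so it is inverted."""
    a = alternatives[name]
    total = 0.0
    for crit, w in weights.items():
        vals = [alt[crit] for alt in alternatives.values()]
        norm = (a[crit] - min(vals)) / (max(vals) - min(vals))
        if crit == "freq_per_week":
            norm = 1.0 - norm
        total += w * norm
    return total

best = min(alternatives, key=score)
print(best)  # -> road_only
```

Dropping the time weight toward zero flips the choice to intermodal, which is exactly the sensitivity the abstract describes.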
Procedia PDF Downloads 98491 Developing Geriatric Oral Health Network is a Public Health Necessity for Older Adults
Authors: Maryam Tabrizi, Shahrzad Aarup
Abstract:
Objectives: Understanding the close association between oral health and overall health for older adults, and delivering person-focused treatment at the right time and in the right place through Project ECHO telementoring. Methodology: Data from monthly ECHO telementoring sessions were collected over three years. Sessions included case presentations covering overall health conditions, considering medications, organ function limitations, and the level of cognition. Contributions: Providing specialist-level care to all elderly regardless of their location and other health conditions, and decreasing oral health inequity by increasing the workforce via the Project ECHO telementoring program worldwide. By 2030, the number of adults in the USA over the age of 65 will increase by more than 60% (approx. 46 million), and over 22 million (30%) of 74 million older Americans will need specialized geriatrician care. In 2025, the national shortage of medical geriatricians will be close to 27,000. Most individuals 65 and older do not receive oral health care due to lack of access, availability, or affordability. One of the main reasons is a significant shortage of oral health (OH) education and resources for the elderly, particularly in rural areas. Poor OH is a social stigma and a threat to the quality and safety of the overall health of the elderly with physical and cognitive decline. Poor OH conditions may be costly and sometimes life-threatening. Non-traumatic dental-related emergency department use in Texas alone was over $250 M in 2016. Most elderly over the age of 65 present with one or more chronic diseases, such as arthritis, diabetes, heart disease, and chronic obstructive pulmonary disease (COPD), are at higher risk of developing gum (periodontal) disease, and yet are less likely to get dental care. In addition, most older adults take both prescription and over-the-counter drugs; according to scientific studies, many of these medications cause dry mouth. 
Reduced saliva flow due to aging and medications may increase the risk of cavities and other oral conditions. Most dental schools have already increased geriatric OH content in their educational curriculums, but the aging population worldwide is growing faster than the number of geriatric dentists. Without the use of advanced technology and the creation of a network between specialists and primary care providers, it is impossible to increase the workforce and provide equitable oral health care to the elderly. Project ECHO is a guided-practice model that revolutionizes health education and increases the workforce to provide best-practice specialty care and reduce health disparities. Training oral health providers to utilize the Project ECHO model is a logical response to the shortage and increases the elderly's access to oral health care. Project ECHO trains general dentists and hygienists to provide specialty care services. This means more elderly can get the care they need, in the right place, at the right time, with better treatment outcomes and reduced costs. Keywords: geriatrics, oral health, Project ECHO, chronic disease
Procedia PDF Downloads 174490 Benefits of The ALIAmide Palmitoyl-Glucosamine Co-Micronized with Curcumin for Osteoarthritis Pain: A Preclinical Study
Authors: Enrico Gugliandolo, Salvatore Cuzzocrea, Rosalia Crupi
Abstract:
Osteoarthritis (OA) is one of the most common chronic pain conditions in dogs and cats. OA pain is currently viewed as a mixed phenomenon involving both inflammatory and neuropathic mechanisms at the peripheral (joint) and central (spinal and supraspinal) levels. Oxidative stress has been implicated in OA pain. Although nonsteroidal anti-inflammatory drugs are commonly prescribed for OA pain, they should be used with caution in pets because of long-term adverse effects and controversial efficacy on neuropathic pain. An unmet need remains for safe and effective long-term treatments for OA pain. Palmitoyl-glucosamine (PGA) is an analogue of the ALIAmide palmitoylethanolamide, i.e., one of the body's own endocannabinoid-like compounds playing a sentinel role in nociception. PGA, especially in the micronized formulation, has been shown to be safe and effective in OA pain. The aim of this study was to investigate the effect of a co-micronized formulation of PGA with the natural antioxidant curcumin (PGA-cur) on OA pain. Ten Sprague-Dawley male rats were used for each treatment group. The University of Messina Review Board for the care and use of animals authorized the study. On day 0, rats were anesthetized (5.0% isoflurane in 100% O2) and received an intra-articular injection of MIA (3 mg in 25 μl saline) in the right knee joint, while the left was injected with an equal volume of saline. Starting on the third day after MIA injection, treatments were administered orally three times per week for 21 days, at the following doses: PGA 20 mg/kg, curcumin 10 mg/kg, PGA-cur (2:1 ratio) 30 mg/kg. On day 0 and on days 3, 7, 14 and 21 post-injection, mechanical allodynia was measured using a dynamic plantar von Frey aesthesiometer and expressed as paw withdrawal threshold (PWT) and latency (PWL). Motor functional recovery of the rear limb was evaluated at the same time points by walking track analysis using the sciatic functional index. 
On day 21 post-MIA injection, the concentrations of the following inflammatory and nociceptive mediators were measured in serum using commercial ELISA kits: tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), nerve growth factor (NGF) and matrix metalloproteinases 1, 3 and 9 (MMP-1, MMP-3, MMP-9). The results were analyzed by ANOVA followed by the Bonferroni post-hoc test for multiple comparisons. Micronized PGA reduced neuropathic pain, as shown by the significantly higher PWT and PWL values compared to the vehicle group (p < 0.0001 for all the evaluated time points). The effect of PGA-cur was superior at all time points (p < 0.005). PGA-cur restored motor function already on day 14 (p < 0.005), while micronized PGA was effective a week later (day 21). The MIA-induced increase in the serum levels of all the investigated mediators was inhibited by PGA-cur (p < 0.01). PGA was also effective, except on IL-1β and MMP-3. Curcumin alone was inactive in all the experiments at any time point. These encouraging results suggest that PGA-cur may represent a valuable option in OA pain management and warrant further confirmation in well-powered clinical trials. Keywords: ALIAmides, curcumin, osteoarthritis, palmitoyl-glucosamine
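A minimal sketch of the Bonferroni correction applied after ANOVA: with m pairwise comparisons, each raw p-value is multiplied by m (capped at 1) before comparison with the significance level. The comparison labels and p-values below are illustrative, not the study's results:

```python
# Bonferroni adjustment after ANOVA: with m pairwise comparisons, the
# adjusted p-value is min(1, m * p). Labels and p-values are illustrative.
alpha = 0.05
raw_p = {
    "vehicle vs micronized PGA": 0.00008,
    "vehicle vs PGA-cur":        0.00001,
    "micronized PGA vs PGA-cur": 0.004,
    "vehicle vs curcumin":       0.41,
}
m = len(raw_p)
adjusted = {name: min(1.0, m * p) for name, p in raw_p.items()}
for name, p_adj in adjusted.items():
    verdict = "significant" if p_adj < alpha else "n.s."
    print(f"{name}: p_adj = {p_adj:.5f} ({verdict})")
```

The cap at 1 keeps the adjusted value a valid probability; the correction controls the family-wise error rate at the cost of some power.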
Procedia PDF Downloads 115489 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals are too weak to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. 
We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, the proposed scheme improves positioning performance and reduces radio map construction costs significantly compared to traditional methods. Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
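The core fingerprinting idea behind the proposed scheme can be sketched with a plain nearest-neighbour matcher over a small radio map; the S-DCGAN and t-SNE components respectively replace the exhaustive site survey and the raw-feature matching. RSS values and coordinates below are made up for illustration:

```python
import math

# A tiny radio map: ((x, y) in metres, [RSS from AP1, AP2, AP3] in dBm).
# All entries are hypothetical survey points, not the paper's data.
radio_map = [
    ((0.0, 0.0), [-40.0, -70.0, -80.0]),
    ((5.0, 0.0), [-55.0, -60.0, -75.0]),
    ((0.0, 5.0), [-50.0, -72.0, -65.0]),
]

def locate(observed_rss):
    """Return the radio-map position whose fingerprint is closest to the
    observed signal vector in Euclidean distance (nearest neighbour)."""
    return min(radio_map, key=lambda fp: math.dist(fp[1], observed_rss))[0]

print(locate([-43.0, -69.0, -79.0]))  # -> (0.0, 0.0)
```

Generative augmentation of the radio map addresses the main cost of this approach: densely surveying real fingerprints at every reference point.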
Procedia PDF Downloads 42488 Advanced Statistical Approaches for Identifying Predictors of Poor Blood Pressure Control: A Comprehensive Analysis Using Multivariable Logistic Regression and Generalized Estimating Equations (GEE)
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Effective management of hypertension remains a critical public health challenge, particularly among racially and ethnically diverse populations. This study employs sophisticated statistical models to rigorously investigate the predictors of poor blood pressure (BP) control, with a specific focus on demographic, socioeconomic, and clinical risk factors. Leveraging a large sample of 19,253 adults drawn from the National Health and Nutrition Examination Survey (NHANES) across three distinct time periods (2013-2014, 2015-2016, and 2017-2020), we applied multivariable logistic regression and generalized estimating equations (GEE) to account for the clustered structure of the data and potential within-subject correlations. Our multivariable models identified significant associations between poor BP control and several key predictors, including race/ethnicity, age, gender, body mass index (BMI), prevalent diabetes, and chronic kidney disease (CKD). Non-Hispanic Black individuals consistently exhibited higher odds of poor BP control across all periods (OR = 1.99; 95% CI: 1.69, 2.36 for the overall sample; OR = 2.33; 95% CI: 1.79, 3.02 for 2017-2020). Younger age groups demonstrated substantially lower odds of poor BP control compared to individuals aged 75 and older (OR = 0.15; 95% CI: 0.11, 0.20 for ages 18-44). Men also had a higher likelihood of poor BP control relative to women (OR = 1.55; 95% CI: 1.31, 1.82), while BMI ≥35 kg/m² (OR = 1.76; 95% CI: 1.40, 2.20) and the presence of diabetes (OR = 2.20; 95% CI: 1.80, 2.68) were associated with increased odds of poor BP management. Further analysis using GEE models, accounting for temporal correlations and repeated measures, confirmed the robustness of these findings. Notably, individuals with chronic kidney disease displayed markedly elevated odds of poor BP control (OR = 3.72; 95% CI: 3.09, 4.48), with significant differences across the survey periods. 
Additionally, higher education levels and better self-reported diet quality were associated with improved BP control. College graduates exhibited a reduced likelihood of poor BP control (OR = 0.64; 95% CI: 0.46, 0.89), particularly in the 2015-2016 period (OR = 0.48; 95% CI: 0.28, 0.84). Similarly, excellent dietary habits were associated with significantly lower odds of poor BP control (OR = 0.64; 95% CI: 0.44, 0.94), underscoring the importance of lifestyle factors in hypertension management. In conclusion, our findings provide compelling evidence of the complex interplay between demographic, clinical, and socioeconomic factors in predicting poor BP control. The application of advanced statistical techniques such as GEE enhances the reliability of these results by addressing the correlated nature of repeated observations. This study highlights the need for targeted interventions that consider racial/ethnic disparities, clinical comorbidities, and lifestyle modifications in improving BP control outcomes. Keywords: hypertension, blood pressure, NHANES, generalized estimating equations
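As a pointer to how the reported odds ratios and confidence intervals relate to the underlying logistic/GEE coefficients: OR = exp(β) and the 95% CI is exp(β ± 1.96·SE). The coefficient and standard error below are back-of-envelope values chosen to land near the headline race/ethnicity estimate, not the study's actual fit:

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Return (OR, lower, upper) for a logit coefficient and its standard
    error, exponentiating the Wald interval on the log-odds scale."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical beta and SE picked to approximate OR = 1.99 (1.69, 2.36).
or_, lo, hi = odds_ratio_ci(beta=0.688, se=0.086)
print(f"OR = {or_:.2f}; 95% CI: {lo:.2f}, {hi:.2f}")
```

GEE fits with a logit link report coefficients on the same scale, so the same exponentiation applies while the robust standard errors account for within-subject correlation.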
Procedia PDF Downloads 11487 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions
Authors: Pirta Palola, Richard Bailey, Lisa Wedding
Abstract:
Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. 
For example, ecological resilience value may, in some cases, be best understood as binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; and 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights for the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy. Keywords: economics of biodiversity, environmental valuation, natural capital, value function
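A compact sketch of a logistic environmental value function with Monte Carlo propagation of parameter uncertainty, as the framework suggests for near-binary values such as resilience. All parameter values are illustrative assumptions:

```python
import math
import random

def logistic_value(q: float, q_mid: float, k: float) -> float:
    """Environmental value function mapping ecosystem quantity/quality q
    to a value on [0, 1]; for large steepness k it approaches the binary
    exists-or-is-lost case described for resilience."""
    return 1.0 / (1.0 + math.exp(-k * (q - q_mid)))

# Monte Carlo propagation of uncertainty in the threshold parameter q_mid:
# the observed state q is fixed, the tipping point is uncertain.
random.seed(1)
q_observed = 0.55
samples = [logistic_value(q_observed, random.gauss(0.5, 0.05), k=40.0)
           for _ in range(10_000)]
mean_value = sum(samples) / len(samples)
print(f"expected value ~ {mean_value:.2f}")
```

The sampled distribution, not just the mean, is what an accounting system would carry forward when values cannot be conceptualized as marginal changes.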
Procedia PDF Downloads 194486 Averting a Financial Crisis through Regulation, Including Legislation
Authors: Maria Krambia-Kapardis, Andreas Kapardis
Abstract:
The paper discusses regulatory and legislative measures implemented by various nations in an effort to avert another financial crisis. More specifically, to address the financial crisis, the European Commission followed the practice of other developed countries and implemented a European Economic Recovery Plan in an attempt to overhaul the regulatory and supervisory framework of the financial sector. In 2010 the Commission introduced the European Systemic Risk Board and in 2011 the European System of Financial Supervision. Some experts have argued that the type and extent of financial regulation introduced in Europe in the wake of the 2008 crisis has been excessive and counterproductive. In considering how different countries responded to the financial crisis, global regulators have shown a more focused commitment to combating industry misconduct and pre-empting abusive behavior. Regulators have also increased the funding and resources at their disposal; increased regulatory fines, with a growing trend towards action against individuals; and, finally, focused on market abuse and market conduct issues. Financial regulation can be effected, first of all, through legislation. However, neither ex ante nor ex post regulation is by itself effective in reducing systemic risk. Consequently, to avert a financial crisis, in their endeavor to achieve both economic efficiency and financial stability, governments need to balance the two approaches to financial regulation. Fiduciary duty is another means by which the behavior of actors in the financial world is constrained and, thus, regulated. Fiduciary duties extend over and above other requirements set out by statute and/or common law and cover allegations of breach of fiduciary duty, negligence or fraud. Careful analysis of the etiology of the 2008 financial crisis demonstrates the great importance of corporate governance as a way of regulating boardroom behavior. 
In addition, the regulation of professions, including accountants and auditors, plays a crucial role as far as the financial management of companies is concerned. In the US, the Sarbanes-Oxley Act of 2002 established the Public Company Accounting Oversight Board in order to protect investors from financial accounting fraud. In most countries around the world, however, accounting regulation consists of a legal framework, international standards, education, and licensure. Accounting regulation is necessary because of the information asymmetry and the conflict of interest that exist between managers and users of financial information. If a holistic approach is to be taken, then one cannot ignore the regulation of legislators themselves, which can take the form of hard or soft legislation. The science of averting a financial crisis is yet to be perfected and, as the preceding discussion shows, this is unlikely to be achieved in the foreseeable future, as 'disaster myopia' may be reduced but will not be eliminated. It is easier, of course, to be wise in hindsight, and regulating unreasonably risky decisions and unethical or outright criminal behavior in the financial world remains a major challenge for governments, corporations, and professions alike. Keywords: financial crisis, legislation, regulation, financial regulation
Procedia PDF Downloads 398485 Oil-price Volatility and Economic Prosperity in Nigeria: Empirical Evidence
Authors: Yohanna Panshak
Abstract:
The impact of macroeconomic instability on economic growth and prosperity has been at the forefront of many discourses among researchers and policy makers and has generated much controversy over the years. This has prompted a series of research efforts towards understanding the remote causes of the phenomenon: its nature, its determinants, and how it can be targeted and mitigated. While some have attributed the root cause of macroeconomic flux in Nigeria to oil-price volatility, others view the issue as resulting from a constellation of structural constraints both within and outside the shores of the country. Scholars such as Akpan (2009), Aliyu (2009) and Olomola (2006) argue that oil-price volatility can determine, or has the potential to determine, economic growth. On the contrary, Darby (1982), Cerralo (2005) and others share the opinion that it can slow down growth. The former argument rests on the understanding that, for net oil-exporting economies, a price upbeat directly increases real national income through higher export earnings, whereas the latter alludes to the case of net oil-importing countries, which experience increased input costs, reduced non-oil demand, low investment, a fall in tax revenues, and ultimately a larger budget deficit that further reduces the welfare level. Therefore, the precise impact of oil-price volatility on virtually any economy is a function of whether it is an oil-exporting or oil-importing nation. Research on oil-price volatility and its effect on the growth of the Nigerian economy is evolving and marching towards resolving Nigeria's macroeconomic instability, as long as oil revenue remains the mainstay and driver of socio-economic engineering. Recently, a major importer of Nigeria's oil, the United States, made a historic breakthrough towards a more efficient energy source for its economy, with the capacity to serve a significant part of the world. 
This undoubtedly suggests a threat to the country's foreign exchange earnings. The need to understand fluctuation in its major export commodity is critical. This paper leans on renaissance growth theory, with particular focus on the theoretical work of Lee (1998), a leading proponent of this school, who draws a clear distinction between oil-price changes and oil-price volatility. Against this background, the research seeks to empirically examine the impact of oil-price volatility on government expenditure using quarterly time-series data spanning 1986:1 to 2014:4. A Vector Autoregression (VAR) econometric approach shall be used. The structural properties of the model shall be tested using the Augmented Dickey-Fuller and Phillips-Perron tests. Relevant diagnostic tests for heteroscedasticity, serial correlation, and normality shall also be carried out. Policy recommendations shall be offered based on the empirical findings, which, it is believed, will assist policy makers not only in Nigeria but the world over. Keywords: oil-price, volatility, prosperity, budget, expenditure
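The intuition behind the proposed Augmented Dickey-Fuller testing can be sketched without an econometrics package: regressing the change of a series on its lagged level yields a slope near zero for a unit-root (random-walk) series and a clearly negative slope for a mean-reverting one. The simulated series below are illustrative, not oil-price data:

```python
import random

def ar1_slope(y):
    """OLS slope of dy_t = rho * y_{t-1} + e_t (no intercept): rho near 0
    suggests a unit root; rho clearly below 0 suggests mean reversion.
    The full ADF test adds lagged differences and uses special critical values."""
    num = sum(y[t - 1] * (y[t] - y[t - 1]) for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

random.seed(0)
walk = [0.0]     # random walk: has a unit root
revert = [0.0]   # stationary AR(1): mean-reverting
for _ in range(2_000):
    walk.append(walk[-1] + random.gauss(0, 1))
    revert.append(0.5 * revert[-1] + random.gauss(0, 1))

print(f"random walk slope:    {ar1_slope(walk):+.3f}")
print(f"mean-reverting slope: {ar1_slope(revert):+.3f}")
```

In practice the slope estimate is compared against Dickey-Fuller critical values rather than standard t-tables, which is what dedicated ADF and Phillips-Perron implementations handle.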
Procedia PDF Downloads 270