Search results for: microscopic techniques
917 Design and Evaluation of a Prototype for Non-Invasive Screening of Diabetes – Skin Impedance Technique
Authors: Pavana Basavakumar, Devadas Bhat
Abstract:
Diabetes is a disease which often goes undiagnosed until its secondary effects are noticed. Early detection of the disease is necessary to avoid serious consequences which could lead to the death of the patient. Conventional invasive tests for screening of diabetes are mostly painful, time-consuming and expensive. There is also a risk of infection involved; it is therefore essential to develop non-invasive methods to screen and estimate the level of blood glucose. Extensive research is ongoing with this perspective, involving various techniques that explore optical, electrical, chemical and thermal properties of the human body that directly or indirectly depend on the blood glucose concentration. Thus, non-invasive blood glucose monitoring has grown into a vast field of research. In this project, an attempt was made to devise a prototype for screening of diabetes by measuring the electrical impedance of the skin and building a model to predict a patient's condition based on the measured impedance. The prototype developed passes a negligible constant current (0.5 mA) across a subject's index finger through tetrapolar silver electrodes and measures the output voltage across a wide range of frequencies (10 kHz – 4 MHz). The measured voltage is proportional to the impedance of the skin. The impedance was acquired in real time for further analysis. The study was conducted on over 75 subjects with permission from the institutional ethics committee; along with impedance, each subject's blood glucose values were also noted using the conventional method. Nonlinear regression analysis was performed on the features extracted from the impedance data to obtain a model that predicts blood glucose values for a given set of features. When the predicted data were depicted on Clarke's error grid, only 58% of the predicted values were clinically acceptable.
Since the objective of the project was to screen diabetes and not the actual estimation of blood glucose, the data were classified into three classes, 'NORMAL FASTING', 'NORMAL POSTPRANDIAL' and 'HIGH', using a linear Support Vector Machine (SVM). The classification accuracy obtained was 91.4%. The developed prototype is economical, fast and pain-free; thus, it can be used for mass screening of diabetes.
Keywords: Clarke's error grid, electrical impedance of skin, linear SVM, nonlinear regression, non-invasive blood glucose monitoring, screening device for diabetes
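The classification step described above can be sketched as a minimal soft-margin linear SVM trained by sub-gradient descent on the hinge loss. The impedance-derived feature values, the labels, and the two-class simplification (the study used three classes) are all illustrative assumptions, not the study's data.

```python
# Minimal linear SVM sketch: primal soft-margin objective
# lam/2 * ||w||^2 + mean hinge loss, minimized by sub-gradient descent.
def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:      # inside margin: hinge-loss sub-gradient step
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:               # correctly classified: only regularize
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical impedance features: [normalized magnitude slope, phase minimum]
X = [[0.2, 0.1], [0.3, 0.2], [0.25, 0.15], [0.9, 0.8], [1.0, 0.9], [0.85, 0.95]]
y = [-1, -1, -1, 1, 1, 1]   # -1 = NORMAL, +1 = HIGH (binary for brevity)
w, b = train_linear_svm(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

In practice a multi-class linear SVM (e.g. one-vs-rest) over the extracted impedance features would play the role of the three-class classifier reported above.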
Procedia PDF Downloads 323
916 Mesocarbon Microbeads Modification of Stainless-Steel Current Collector to Stabilize Lithium Deposition and Improve the Electrochemical Performance of Anode Solid-State Lithium Hybrid Battery
Authors: Abebe Taye
Abstract:
The interest in enhancing the performance of all-solid-state batteries featuring lithium metal anodes as a potential alternative to traditional lithium-ion batteries has prompted exploration into new avenues. A promising strategy involves transforming lithium-ion batteries into hybrid configurations by integrating lithium-ion and lithium-metal solid-state components. This study focuses on achieving stable lithium deposition and advancing the electrochemical capabilities of solid-state lithium hybrid battery anodes by incorporating mesocarbon microbeads (MCMBs) blended with silver nanoparticles. To achieve this, MCMBs blended with silver nanoparticles are coated on stainless-steel current collectors. These samples undergo a battery of analyses employing diverse techniques. Surface morphology is studied through scanning electron microscopy (SEM). The electrochemical behavior of the coated samples is evaluated in both half-cell and full-cell setups utilizing an argyrodite-type sulfide electrolyte. The stability of MCMBs in the electrolyte is assessed using electrochemical impedance spectroscopy (EIS). Additional insights into the composition are gleaned through X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, and energy-dispersive X-ray spectroscopy (EDS). At an ultra-low N/P ratio of 0.26, stability is upheld for over 100 charge/discharge cycles in half-cells. When applied in a full-cell configuration, the hybrid anode preserves 60.1% of its capacity after 80 cycles at 0.3 C under a low N/P ratio of 0.45. In sharp contrast, the capacity retention of the cell using untreated MCMBs declines to 20.2% after a mere 60 cycles. The introduction of MCMBs combined with silver nanoparticles into the hybrid anode of solid-state lithium batteries substantially elevates their stability and electrochemical performance.
This approach ensures consistent lithium deposition and removal, mitigating dendrite growth and the accumulation of inactive lithium. The findings from this investigation hold significant value in elevating the reversibility and energy density of lithium-ion batteries, thereby making noteworthy contributions to the advancement of more efficient energy storage systems.
Keywords: MCMB, lithium metal, hybrid anode, silver nanoparticle, cycling stability
Procedia PDF Downloads 73
915 Optimization of Temperature Coefficients for MEMS Based Piezoresistive Pressure Sensor
Authors: Vijay Kumar, Jaspreet Singh, Manoj Wadhwa
Abstract:
Piezoresistive pressure sensors were among the first microelectromechanical system (MEMS) devices developed and still display significant growth, prompted by advancements in micromachining techniques and material technology. In MEMS-based piezoresistive pressure sensors, temperature can be considered the main environmental condition affecting system performance. The study of the thermal behavior of these sensors is essential to define the parameters that cause the output characteristics to drift. In this work, the effects of temperature and doping concentration in a boron-implanted piezoresistor for a silicon-based pressure sensor are discussed. We have optimized the temperature coefficient of resistance (TCR) and temperature coefficient of sensitivity (TCS) values to determine the effect of temperature drift on sensor performance. To reduce the temperature drift, a high doping concentration is needed. The Wheatstone bridge in a pressure sensor is supplied with either a constant voltage or a constant current input. With a constant voltage supply, the thermal drift can be compensated by an external compensation circuit, whereas with a constant current supply the thermal drift can be compensated directly by the bridge itself. It is nevertheless beneficial to also compensate the temperature coefficients of the piezoresistors so as to further reduce the temperature drift. With a current supply, the TCS depends on both the temperature coefficient of the piezoresistive coefficient (TCπ) and the TCR. As TCπ is a negative quantity and TCR is a positive quantity, it is possible to choose a doping concentration at which they cancel each other. An exact cancellation of the TCR and TCπ values is not readily attainable; therefore, an adjustable approach is generally used in practical applications.
Thus, one goal of this work has been to better understand the origin of temperature drift in pressure sensor devices so that the temperature effects can be minimized or eliminated. This paper describes the optimum doping levels for the piezoresistors at which the TCS of the pressure transducers will be zero due to the cancellation of the TCR and TCπ values. The fabrication and characterization of the pressure sensor are also carried out. The optimized TCR value obtained for the fabricated die is 2300 ± 100 ppm/°C, for which the piezoresistors are implanted at a doping concentration of 5E13 ions/cm³, and a TCS value of -2100 ppm/°C is achieved. The desired TCR and TCS values, approximately equal in magnitude, are thereby achieved, so the thermal effects are considerably reduced. Finally, we have calculated the effect of temperature and doping concentration on the output characteristics of the sensor. This study allows us to predict the sensor behavior against temperature and to minimize this effect by optimizing the doping concentration.
Keywords: piezo-resistive, pressure sensor, doping concentration, TCR, TCS
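The cancellation idea above (positive TCR against negative TCπ under a constant-current supply) can be sketched as a simple search over a coefficient-vs-doping table. The table values are hypothetical illustration data with the qualitative trend only (TCR grows and |TCπ| shrinks with doping), not measured values from the paper.

```python
# Pick the doping concentration whose residual |TCS| = |TCR + TCpi|
# is smallest, assuming the first-order relation TCS ~ TCR + TCpi
# for a constant-current bridge supply.

# (doping [ions/cm^3], TCR [ppm/C], TCpi [ppm/C]) -- hypothetical values
table = [
    (1e13, 1200, -3900),
    (3e13, 1900, -2800),
    (5e13, 2300, -2400),
    (8e13, 2700, -1800),
    (1e14, 3000, -1500),
]

def tcs_constant_current(tcr, tcpi):
    """First-order TCS under constant current: the two coefficients add."""
    return tcr + tcpi

best = min(table, key=lambda row: abs(tcs_constant_current(row[1], row[2])))
doping, tcr, tcpi = best  # optimum: the row with the smallest residual |TCS|
```

With these assumed numbers the optimum lands at 5E13 ions/cm³ (residual TCS of -100 ppm/°C), illustrating why an exact zero is not readily attainable and an adjustable approach is used in practice.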
Procedia PDF Downloads 177
914 Knowledge Transfer to Builders in Improving Housing Resilience
Authors: Saima Shaikh, Andre Brown, Wallace Enegbuma
Abstract:
Earthquakes strike both developed and developing countries, causing tremendous damage and the loss of millions of lives, mainly due to the collapse of buildings, particularly in poorer countries. Despite socio-economic and technological restrictions, poorer countries have adopted proven and established housing-strengthening techniques from affluent countries. Rural communities are aware of earthquake-strengthening mechanisms for improving housing resilience, but owing to socio-economic and technological constraints, the seismic guidelines are rarely implemented, resulting in informal construction practice. Unregistered skilled laborers make substantial contributions to the informal construction sector, particularly in rural areas where knowledge is scarce. Laborers employ their local expertise in house construction; however, owing to a lack of expertise in safe building procedures, the authorities' regulated seismic norms are not applied. From the perspective of seismic knowledge transfer in safe building practices, the study focuses on the feasibility of implementing seismic guidelines. The study first employs a literature review of the massive-scale reconstruction after the 2005 earthquake in rural Pakistan. The 2005 earthquake damaged over 400,000 homes, killed 70,000 people and displaced 2.8 million. The research subsequently corroborated this pragmatic approach with a questionnaire field survey among rural people in the earthquake-affected areas. Using the literature and the questionnaire survey, the research analyzes people's perspectives on technical acceptability, financial restrictions, and socio-economic viability, and examines the effectiveness of seismic knowledge transfer in safe building practices.
The findings support the creation of a knowledge transfer framework for disaster mitigation and recovery planning, assisting rural communities and builders in minimizing losses and improving response and recovery, as well as improving housing resilience and lowering vulnerabilities. Finally, certain conclusions are drawn in order to continue the resilience research. The research can be further applied in rural areas of developing countries with similar construction practices.
Keywords: earthquakes, knowledge transfer, resilience, informal construction practices
Procedia PDF Downloads 173
913 Hydrogen Sulfide Releasing Ibuprofen Derivative Can Protect Heart After Ischemia-Reperfusion
Authors: Virag Vass, Ilona Bereczki, Erzsebet Szabo, Nora Debreczeni, Aniko Borbas, Pal Herczegh, Arpad Tosaki
Abstract:
Hydrogen sulfide (H₂S) is a toxic gas, but it is produced by certain tissues in small quantities. According to earlier studies, ibuprofen and H₂S have a protective effect against heart tissue damage caused by ischemia-reperfusion. Recently, we have been investigating the effect of a new water-soluble H₂S-releasing ibuprofen molecule, administered after artificially generated ischemia-reperfusion, on isolated rat hearts. The H₂S-releasing property of the new ibuprofen derivative was investigated in vitro, at two concentrations, in medium derived from heart endothelial cell isolation. The ex vivo examinations were carried out on rat hearts. Rats were anesthetized with an intraperitoneal injection of ketamine, xylazine, and heparin. After thoracotomy, hearts were excised and placed into ice-cold perfusion buffer. Perfusion of hearts was conducted in Langendorff mode via the cannulated aorta. In our experiments, we studied the dose-effect of the H₂S-releasing molecule in Langendorff-perfused hearts by applying gradually increasing concentrations of the compound (0–20 µM). The H₂S-releasing ibuprofen derivative was applied before the ischemia for 10 minutes. The H₂S concentration was measured in the coronary effluent solution with an H₂S-detecting electrochemical sensor. The 10 µM concentration was chosen for further experiments, in which the treatment with this solution occurred after the ischemia. The release of H₂S is mediated by hydrolyzing enzymes present in the heart endothelial cells. The protective effect of the new H₂S-releasing ibuprofen molecule can be confirmed by the infarct sizes of hearts using the triphenyltetrazolium chloride (TTC) staining method. Furthermore, we aimed to define the effect of the H₂S-releasing ibuprofen derivative on autophagic and apoptotic processes in damaged hearts by investigating the molecular markers of these events with western blotting and immunohistochemistry techniques.
Our further studies will include the examination of LC3-I/II, p62, Beclin1, caspase-3, and other apoptotic molecules. We hope that confirming the protective effect of the new H₂S-releasing ibuprofen molecule will open a new possibility for the development of more effective cardioprotective agents with fewer side effects. Acknowledgment: This study was supported by the grants of NKFIH-K-124719 and by the European Union and the State of Hungary, co-financed by the European Social Fund in the framework of GINOP-2.3.2-15-2016-00043.
Keywords: autophagy, hydrogen sulfide, ibuprofen, ischemia, reperfusion
Procedia PDF Downloads 139
912 Effect of Upper Face Sheet Material on Flexural Strength of Polyurethane Foam Hybrid Sandwich Material
Authors: M. Atef Gabr, M. H. Abdel Latif, Ramadan El Gamsy
Abstract:
Sandwich panels comprise a thick, lightweight plastic foam such as polyurethane (PU) sandwiched between two relatively thin faces. One or both faces may be flat, lightly profiled or fully profiled. Until recently, sandwich panel construction in Egypt was widely used in cold-storage buildings, cold trucks, prefabricated buildings and insulation in construction. Recently, new techniques have been used in the mass production of sandwich materials, such as reaction injection molding (RIM) and the vacuum bagging technique. However, in recent times their use has increased significantly due to their widespread structural applications in building systems. Structural sandwich panels generally used in Egypt comprise a polyurethane foam core and thin (0.42 mm), high-strength (about 550 MPa yield strength) flat steel faces bonded together using separate adhesives and by the RIM technique. In this paper, we use a new technique in sandwich panel preparation, combining different face sheet materials with polyurethane foam to form sandwich panel structures. Previously, a PU foam core with the same material for both thin faces was used; in this work, we use different face materials and thicknesses for the upper face sheet, such as galvanized steel sheets (G.S), aluminum sheets (Al), fiberglass sheets (F.G) and aluminum-rubber composite sheets (Al/R), with a polyurethane foam core of 10 mm thickness and 45 kg/m³ density, and galvanized steel as the lower face sheet. Using aluminum-rubber composite sheets as a face sheet yields a hybrid composite sandwich panel, which is built by the hand lay-up technique using PU glue as an adhesive. This modification increases the benefits of the face sheet, which will withstand different working environments with a relatively small increase in weight, and will be useful in several applications.
In this work, a 3-point bending test is used to measure the most important factor in sandwich materials, the strength-to-weight ratio (STW), for different combinations of sandwich structures, and a comparison is made to study the effect of changing the face sheet material on the mechanical behavior of the PU sandwich material. Also, the density of the different prepared sandwich materials is measured to obtain the specific bending strength.
Keywords: hybrid sandwich panel, mechanical behavior, PU foam, sandwich panel, 3-point bending, flexural strength
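The evaluation described above can be sketched numerically: the standard 3-point bending formula gives the flexural strength of a rectangular specimen, and dividing by density gives the specific (strength-to-weight) figure used for comparison. The specimen dimensions, failure load, and panel density below are illustrative assumptions, not the study's measurements.

```python
# 3-point bending on a rectangular specimen: sigma = 3*F*L / (2*b*d^2).
# With F in N and dimensions in mm, the result is directly in MPa.
def flexural_strength(force_n, span_mm, width_mm, thickness_mm):
    return 3 * force_n * span_mm / (2 * width_mm * thickness_mm ** 2)

def strength_to_weight(sigma_mpa, density_kg_m3):
    """Specific bending strength: flexural strength per unit density."""
    return sigma_mpa / density_kg_m3

# Hypothetical sandwich specimen: 100 mm span, 25 mm wide, 12 mm thick,
# failing at 600 N; effective panel density 450 kg/m^3.
sigma = flexural_strength(600, 100, 25, 12)   # -> 25.0 MPa
stw = strength_to_weight(sigma, 450)
```

Repeating this for each face sheet variant (G.S, Al, F.G, Al/R) and comparing the STW values is the comparison the paragraph above describes.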
Procedia PDF Downloads 314
911 Nanomaterials for Archaeological Stone Conservation: Re-Assembly of Archaeological Heavy Stones Using Epoxy Resin Modified with Clay Nanoparticles
Authors: Sayed Mansour, Mohammad Aldoasri, Nagib Elmarzugi, Nadia A. Al-Mouallimi
Abstract:
The large archaeological stones used in the construction of ancient Pharaonic tombs, temples, obelisks and other sculptures are always subject to physico-mechanical deterioration and destructive forces, leading to their partial or total breakage. The task of reassembling this type of artifact represents a big challenge for conservators. Recently, researchers have turned to new technologies to improve the properties of traditional adhesive materials and techniques used in the re-assembly of broken large stones. Epoxy resins are used extensively in stone conservation and the re-assembly of broken stone because of their outstanding mechanical properties. The introduction of nanoparticles into polymeric adhesives at low percentages may lead to substantial improvements of their mechanical performance in structural joints and large objects. The aim of this study is to evaluate the effectiveness of clay nanoparticles in enhancing the performance of epoxy adhesives used in the re-assembly of archaeological massive stone by adding proper amounts of those nanoparticles. The nanoparticle-reinforced epoxy nanocomposite was prepared by direct melt mixing with a nanoparticle content of 3% (w/v), then mould-formed into rectangular samples and used as an adhesive for experimental stone samples. Scanning electron microscopy (SEM) was employed to investigate the morphology of the prepared nanocomposites and the distribution of nanoparticles inside the composites. The stability and efficiency of the prepared epoxy nanocomposites and of stone block assemblies with the newly formulated adhesives were tested by artificially aging the samples under different environmental conditions. The effect of incorporating clay nanoparticles on the mechanical properties of epoxy adhesives was evaluated comparatively before and after aging by measuring tensile, compressive, and elongation strength.
The morphological studies revealed that the mixing process between epoxy and nanoparticles succeeded: a relatively homogeneous morphology and good dispersion at low nanoparticle loadings in the epoxy matrix were obtained. The results show that the epoxy-clay nanocomposites exhibited superior tensile, compressive, and elongation strength. Moreover, a marked improvement of the mechanical properties of the stone joints was observed in all states by adding nano-clay to the epoxy, in comparison with the pure epoxy resin.
Keywords: epoxy resins, nanocomposites, clay nanoparticles, re-assembly, archaeological massive stones, mechanical properties
Procedia PDF Downloads 112
910 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter
Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy
Abstract:
Passive sonar is a method for detecting acoustic signals in the ocean; it detects the acoustic signals emanating from external sources. With passive sonar, we can determine only the bearing of the target, with no information about its range. Target Motion Analysis (TMA) is the process of estimating the position and speed of a target using passive sonar information. Since bearing is the only available information, the technique is called bearing-only TMA. Many TMA techniques have been developed; however, until now there has been no truly effective method that can always track an unknown target and extract its moving trace. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for multi-beam sonar, the measurements are quantized due to the sonar beam width. To deal with this, a modified gain extended Kalman filter algorithm is used. The algorithm is fine-tuned, and many modules are added to improve the performance. A special validation gate module is used to ensure the stability of the algorithm. Many indicators of performance and confidence level are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the correct solution every time. To test the performance of the proposed TMA algorithm, a simulation is done with a MATLAB program. The simulator models a discrete scenario for an observer and a target, taking into consideration all the practical aspects of the problem, such as smooth transitions in speed, a circular turn of the ship, noisy measurements, and the quantized bearing measurements that come from multi-beam sonar. The tests are done for many given test scenarios. For all the tests, full tracking is achieved within 10 minutes with very little error.
The range estimation error was less than 5%, the speed error less than 5% and the heading error less than 2 degrees. The online performance estimator is mostly aligned with the real performance. The range estimation confidence level gives a value of 90% when the range error is less than 10%. The experiments show that the proposed TMA algorithm is very robust and has low estimation error; however, the convergence time of the algorithm still needs to be improved.
Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking
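The core of a bearing-only tracker can be sketched as a single EKF measurement update: linearize the bearing measurement atan2(dy, dx) around the current estimate and fuse the scalar innovation. This is a simplified standard-EKF sketch with a position-only state, not the paper's modified-gain variant; the geometry, noise values, and observer positions are assumptions.

```python
# One-bearing EKF update for a static-target, position-only state [px, py].
import math

def bearing(obs, tgt):
    return math.atan2(tgt[1] - obs[1], tgt[0] - obs[0])

def ekf_bearing_update(x, P, z, obs, r_var):
    """x: [px, py] estimate, P: 2x2 covariance, z: measured bearing (rad)."""
    dx, dy = x[0] - obs[0], x[1] - obs[1]
    rsq = dx * dx + dy * dy
    H = [-dy / rsq, dx / rsq]            # Jacobian of atan2 w.r.t. [px, py]
    # Innovation, wrapped to [-pi, pi]
    nu = (z - math.atan2(dy, dx) + math.pi) % (2 * math.pi) - math.pi
    # Scalar innovation covariance S = H P H^T + R
    PH = [P[0][0] * H[0] + P[0][1] * H[1], P[1][0] * H[0] + P[1][1] * H[1]]
    S = H[0] * PH[0] + H[1] * PH[1] + r_var
    K = [PH[0] / S, PH[1] / S]           # Kalman gain
    x_new = [x[0] + K[0] * nu, x[1] + K[1] * nu]
    # Covariance update P = (I - K H) P
    P_new = [[P[i][j] - K[i] * (H[0] * P[0][j] + H[1] * P[1][j])
              for j in range(2)] for i in range(2)]
    return x_new, P_new

truth = [4000.0, 3000.0]                 # hypothetical target position (m)
est, P = [3000.0, 3500.0], [[1e6, 0.0], [0.0, 1e6]]
r_var = (0.5 * math.pi / 180) ** 2       # assumed 0.5-degree bearing noise
for obs in [(0.0, 0.0), (500.0, 0.0), (1000.0, 0.0)]:  # observer maneuvering
    est, P = ekf_bearing_update(est, P, bearing(obs, truth), obs, r_var)
```

The observer motion between updates is what makes range observable at all; the reactive observer maneuver mentioned above serves exactly that purpose in the full algorithm.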
Procedia PDF Downloads 400
909 Nanorods Based Dielectrophoresis for Protein Concentration and Immunoassay
Authors: Zhen Cao, Yu Zhu, Junxue Fu
Abstract:
Immunoassay, i.e., the antigen-antibody reaction, is crucial for disease diagnostics. To achieve an adequate signal in antigen protein detection, a large amount of sample and a long incubation time are needed. However, the amount of protein is usually small at the early stage, which makes it difficult to detect. Unlike cells and DNA, no valid chemical method exists for protein amplification. Thus, an alternative way to improve the signal is through particle manipulation techniques that concentrate proteins, among which dielectrophoresis (DEP) is an effective one. DEP is a technique that concentrates particles in a designated region through a force created by the gradient in a non-uniform electric field. Since the DEP force is proportional to the cube of the particle size and to the gradient of the square of the electric field, it is relatively easy to capture larger particles such as cells; for smaller ones like proteins, a super high gradient is required. In this work, three-dimensional Ag/SiO₂ nanorod arrays, fabricated by an easy physical vapor deposition technique known as oblique angle deposition, have been integrated with a DEP device and created a field gradient as high as 2.6×10²⁴ V²/m³. The nanorods-based DEP device is able to enrich bovine serum albumin (BSA) protein by 1800-fold, and the rate has reached 180-fold/s with only 5 V of applied electric potential. Based on the above nanorods-integrated DEP platform, an immunoassay of mouse immunoglobulin G (IgG) proteins has been performed. Briefly, specific antibodies are immobilized onto the nanorods, then IgG proteins are concentrated and captured, and finally the signal from fluorescence-labelled antibodies is detected. The limit of detection (LoD) is measured as 275.3 fg/mL (~1.8 fM), which is a 20,000-fold enhancement compared with identical assays performed on blank glass plates.
Further, prostate-specific antigen (PSA), a biomarker for the diagnosis of prostate cancer after radical prostatectomy, is also quantified, with a LoD as low as 2.6 pg/mL. The time to signal saturation has been significantly reduced to one minute. In summary, together with an easy nanorod fabrication and integration method, this nanorods-based DEP platform has demonstrated highly sensitive immunoassay performance and thus holds great potential for applications in early point-of-care diagnostics.
Keywords: dielectrophoresis, immunoassay, oblique angle deposition, protein concentration
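The scaling argument above (DEP force proportional to r³ and to ∇E²) can be sketched numerically with the standard time-averaged DEP force expression F = 2π·ε_m·r³·Re[K]·∇(E²), using the gradient reported for the nanorod device. The particle radii and Clausius-Mossotti factor are illustrative assumptions.

```python
# Why proteins need an extreme gradient: compare DEP force on a
# protein-sized vs a cell-sized particle at the same grad(E^2).
import math

EPS0 = 8.854e-12            # vacuum permittivity, F/m
eps_m = 78 * EPS0           # aqueous medium (relative permittivity ~78)
grad_E2 = 2.6e24            # V^2/m^3, gradient reported for the device
re_K = 0.5                  # assumed Clausius-Mossotti factor

def dep_force(radius_m):
    """Time-averaged DEP force: 2*pi*eps_m*r^3*Re[K]*grad(E^2), in N."""
    return 2 * math.pi * eps_m * radius_m ** 3 * re_K * grad_E2

f_protein = dep_force(3e-9)   # ~BSA-sized particle (assumed 3 nm radius)
f_cell = dep_force(5e-6)      # cell-sized particle (assumed 5 um radius)
ratio = f_cell / f_protein    # pure r^3 scaling
```

The r³ scaling puts roughly nine orders of magnitude between the two forces, which is why sharp nanorod tips generating ~10²⁴ V²/m³ are needed to trap proteins at all.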
Procedia PDF Downloads 102
908 The Illegal Architecture of Apartheid in Palestine
Authors: Hala Barakat
Abstract:
Architecture plays a crucial role in the colonization and organization of spaces, as well as the preservation of cultures and history. As a result of 70 years of occupation, Palestinian land, culture, and history are endangered today. The government of Israel has used architecture to squeeze Palestinians out and seize their land. The occupation has managed to fragment the West Bank and leave visible scars on the landscape by creating obstacles, barriers, watchtowers, checkpoints, walls, apartheid roads, border devices, and illegal settlements to unjustly claim land from its indigenous population. The apartheid architecture has divided the Palestinian social and urban fabric into pieces, similar to the Bantustans. The architectural techniques and methods used by the occupation are evidence of prejudice, and while the illegal settlements remain condemned by the United Nations, little is being done to officially end this apartheid. Illegal settlements range in scale from individual units to established cities and house more than 60,000 Israeli settlers who immigrated from all over Europe and the United States. Architecture by Israel is often directed towards expressing ideologies and serving as evidence of its political agenda. More than 78% of what was granted to Palestine after the drawing of the Green Line in 1948 is under Israeli occupation today. This project aims to map the illegal architecture as a criticism of governmental agendas in the West Bank and historic Palestinian land. The paper will also discuss the resistance to the newly developed plan for the last Arab village in Jerusalem, Lifta. The illegal architecture has isolated Palestinians from each other and installed obstacles to control their movement. The architecture of occupation has no ethical or humane logic; it is entirely political and administrative, and the story should not be left to a silenced architecture to tell.
Architecture is not being used as a connecting device but rather as a way to implement political injustice and spatial oppression. By narrating stories of the architecture of occupation, we can highlight the spatial injustice of the complex apartheid infrastructure. The Israeli government has managed to corrupt architecture into serving as a divider between cultural groups, allowing unlawful and unethical architecture to define its culture and values. As architects and designers, the roles we play in the development of illegal settlements must align with the spatial ethics we practice. Most importantly, our profession is not performing architecturally when we design a house with a particular roof color to ensure it will not be mistaken for a Palestinian house and attacked accidentally.
Keywords: apartheid, illegal architecture, occupation, politics
Procedia PDF Downloads 150
907 The Use of Optical-Radar Remotely-Sensed Data for Characterizing Geomorphic, Structural and Hydrologic Features and Modeling Groundwater Prospective Zones in Arid Zones
Authors: Mohamed Abdelkareem
Abstract:
Remote sensing data contribute to predicting prospective areas of water resources. Integration of microwave and multispectral data along with climatic, hydrologic, and geological data has been used here. In this article, Sentinel-2, Landsat-8 Operational Land Imager (OLI), Shuttle Radar Topography Mission (SRTM), Tropical Rainfall Measuring Mission (TRMM), and Advanced Land Observing Satellite (ALOS) Phased Array Type L-band Synthetic Aperture Radar (PALSAR) data were utilized to identify the geological, hydrologic and structural features of Wadi Asyuti, a defunct tributary of the Nile basin in the eastern Sahara. Image transformation of the Sentinel-2 and Landsat-8 data allowed characterization of the different varieties of rock units. Integration of microwave remotely-sensed data and GIS techniques provided information on physical characteristics of catchments and rainfall zones that play a crucial role in mapping groundwater prospective zones. Fused Landsat-8 OLI and ALOS/PALSAR data improved the delineation of structural elements that are difficult to reveal using optical data alone. Lineament extraction and interpretation indicated that the area is clearly shaped by a NE-SW graben cut by a NW-SE trend. Such structures allowed the accumulation of thick sediments in the downstream area. Processing of recent OLI data acquired on March 15, 2014, verified the flood potential maps, offered the opportunity to extract the extent of the flooding zone of the recent flash flood event (March 9, 2014), and revealed infiltration characteristics. Several layers, including geology, slope, topography, drainage density, lineament density, soil characteristics, rainfall, and morphometric characteristics, were combined after assigning a weight to each using a GIS-based knowledge-driven approach.
The results revealed that the predicted groundwater potential zones (GPZs) can be arranged into six distinctive groups, depending on their probability for groundwater, namely: very low, low, moderate, high, very high, and excellent. Field and well data validated the delineated zones.
Keywords: GIS, remote sensing, groundwater, Egypt
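The knowledge-driven weighted overlay described above can be sketched as: normalize each thematic layer to [0, 1], multiply by its assigned weight, sum, and bin the score into the six potential classes. The weights and the tiny 2x2 grids are illustrative assumptions, not the study's values.

```python
# GIS-style weighted overlay on toy rasters (2x2 grids, values already
# normalized to [0, 1]); weights sum to 1.0.
layers = {
    "geology":          [[0.9, 0.4], [0.2, 0.8]],
    "slope":            [[0.8, 0.5], [0.1, 0.9]],
    "drainage_density": [[0.7, 0.3], [0.3, 0.7]],
    "rainfall":         [[1.0, 0.6], [0.2, 0.9]],
}
weights = {"geology": 0.35, "slope": 0.15, "drainage_density": 0.2, "rainfall": 0.3}

CLASSES = ["very low", "low", "moderate", "high", "very high", "excellent"]

def classify(score):
    """Bin a [0, 1] suitability score into one of the six GPZ classes."""
    return CLASSES[min(int(score * 6), 5)]

gpz = [[classify(sum(weights[k] * layers[k][i][j] for k in layers))
        for j in range(2)] for i in range(2)]
```

In a real workflow the same arithmetic runs per pixel over full rasters (e.g. with a raster calculator), and the class breaks are chosen from the score distribution rather than fixed equal bins.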
Procedia PDF Downloads 96
906 Satellite Interferometric Investigations of Subsidence Events Associated with Groundwater Extraction in Sao Paulo, Brazil
Authors: B. Mendonça, D. Sandwell
Abstract:
The Metropolitan Region of Sao Paulo (MRSP) has suffered from serious water scarcity. Consequently, the most convenient solution has been drilling wells to extract groundwater from local aquifers. However, this requires constant vigilance to prevent over-extraction and future events that could pose a serious threat to the population, such as subsidence. Radar interferometry techniques (InSAR) allow continuous investigation of such phenomena. The analysis in the present study covers 23 SAR images dated from October 2007 to March 2011, obtained by the ALOS-1 spacecraft. Data processing was done with the software GMTSAR, using the InSAR technique to create pairs of interferograms showing ground displacement over different time spans. First results show a correlation between the locations of 102 wells registered in 2009 and signals of ground displacement equal to or lower than -90 millimeters (mm) in the region. The longest time-span interferogram obtained covers October 2007 to March 2010. From that interferogram, it was possible to detect the average velocity of displacement in millimeters per year (mm/y) and identify the areas in which strong signals have persisted in the MRSP. Four specific areas with subsidence signals of 28 mm/y to 40 mm/y were chosen to investigate the phenomenon: Guarulhos (Sao Paulo International Airport), Greater Sao Paulo, Itaquera and Sao Caetano do Sul. The coverage areas of the signals were between 0.6 km and 1.65 km in length. All areas are located above a sedimentary aquifer. Itaquera and Sao Caetano do Sul showed signals varying from 28 mm/y to 32 mm/y. On the other hand, the places most likely to be suffering from stronger subsidence are Greater Sao Paulo and Guarulhos, right beside the International Airport of Sao Paulo; the rate of displacement observed in both regions goes from 35 mm/y to 40 mm/y.
Previous investigations of water use at the International Airport highlight the risks of the excessive water extraction that was being done through 9 deep wells. Therefore, it is affirmed that subsidence events are likely to occur and to cause serious damage in the area. This study reveals a situation that has not been explored with proper attention in the city, given its social and economic consequences. Since the data were only available until 2011, the question that remains is whether the situation still persists. It can be reaffirmed, however, that there is a scenario of risk at the International Airport of Sao Paulo that needs further investigation.
Keywords: ground subsidence, Interferometric Synthetic Aperture Radar (InSAR), metropolitan region of Sao Paulo, water extraction
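The displacement and velocity figures above come from converting unwrapped interferometric phase into line-of-sight motion. A minimal sketch of that conversion follows; the phase value and time span are illustrative assumptions (ALOS PALSAR is L-band, wavelength about 23.6 cm).

```python
# Repeat-pass InSAR: one fringe (2*pi of phase) corresponds to half a
# wavelength of line-of-sight motion, so d = -(lambda / (4*pi)) * phase.
import math

WAVELENGTH_M = 0.236      # ALOS PALSAR L-band wavelength

def los_displacement_mm(unwrapped_phase_rad):
    return -(WAVELENGTH_M / (4 * math.pi)) * unwrapped_phase_rad * 1000.0

def velocity_mm_per_year(displacement_mm, days):
    return displacement_mm / (days / 365.25)

# Hypothetical pixel: +5.0 rad of unwrapped phase over an interferogram
# spanning roughly October 2007 to March 2010 (~880 days).
d_mm = los_displacement_mm(5.0)
v = velocity_mm_per_year(d_mm, 880)
```

With these assumed inputs the pixel shows about -94 mm of line-of-sight motion, an average rate near -39 mm/y, i.e. the order of magnitude reported for the strongest subsidence signals above.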
Procedia PDF Downloads 352
905 Reduction of Residual Stress by Variothermal Processing and Validation via Birefringence Measurement Technique on Injection Molded Polycarbonate Samples
Authors: Christoph Lohr, Hanna Wund, Peter Elsner, Kay André Weidenmann
Abstract:
Injection molding is one of the most commonly used techniques in industrial polymer processing. In the conventional injection molding process, the liquid polymer is injected into the cavity of the mold, where it immediately starts hardening at the cooled walls. To compensate for the shrinkage, caused predominantly by this rapid cooling, holding pressure is applied. Throughout this process, residual stresses are produced by the temperature difference between the polymer melt and the injection mold and by the relaxation of polymer chains that were oriented by the high process pressures and injection speeds. These residual stresses often weaken or change the structural behavior of the parts or lead to deformation of components. One solution for reducing residual stresses is variothermal processing. Here the mold is heated – i.e., near or above the glass transition temperature of the polymer – the polymer is injected, and before the mold is opened and the part ejected, the mold is cooled. For the next cycle, the mold is heated again and the procedure repeats. The rapid heating and cooling of the mold are realized indirectly by convection of heated and cooled liquid (here: water) pumped through fluid channels beneath the mold surface. In this paper, the influences of variothermal processing on residual stresses are analyzed with samples on a larger scale (500 mm x 250 mm x 4 mm). In addition, the influence of functional elements, such as abrupt changes in wall thickness, bosses, and ribs, on the residual stress is examined. Therefore, the polycarbonate samples are produced by variothermal and isothermal processing. The melt is injected into a heated mold, which in our case has a temperature varying between 70 °C and 160 °C. After the filling of the cavity, the closed mold is cooled down, varying from 70 °C to 100 °C. The pressure and temperature inside the mold are monitored and evaluated with cavity sensors.
The residual stresses of the produced samples are visualized by birefringence, which exploits the effect of stress on the refractive index of the polymer. The colorful spectrum can be revealed by placing the sample between a polarized light source and a second polarization filter. To show the effect of the processing route on the reduction of residual stress, the birefringence images of the isothermally and variothermally produced samples are compared and evaluated. In this comparison, the variothermally produced samples show fewer maxima of each color spectrum than the isothermally produced samples, which indicates that the residual stress of the variothermally produced samples is lower.
Keywords: birefringence, injection molding, polycarbonate, residual stress, variothermal processing
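The link between fringe counts and stress that underlies such birefringence readings is the stress-optic law, N·λ = C·σ·t. A minimal sketch follows; the stress-optic coefficient used is only a typical literature order of magnitude for polycarbonate, not a value measured for these samples.

```python
import math

def stress_from_fringe_order(N, wavelength_m, thickness_m, C_brewster):
    """Stress-optic law: N * lambda = C * sigma * t  =>  sigma = N*lambda/(C*t).
    Returns the principal stress difference sigma in Pa."""
    return N * wavelength_m / (C_brewster * thickness_m)

# Example: fringe order 3 at 550 nm through a 4 mm plate, assuming a
# stress-optic coefficient of 78e-12 Pa^-1 (a placeholder of typical
# magnitude for polycarbonate, not a measured value).
sigma = stress_from_fringe_order(3, 550e-9, 4e-3, 78e-12)
print(round(sigma / 1e6, 2), "MPa")
```

Because the fringe order scales linearly with the stress difference, counting the color maxima, as done in the comparison above, is a direct qualitative proxy for the residual stress level.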
Procedia PDF Downloads 282
904 Environmental Accounting: A Conceptual Study of Indian Context
Authors: Pradip Kumar Das
Abstract:
As the entire world continues its rapid move towards industrialization, mankind's ability to maintain an ecological balance is seriously threatened. Geographical and natural forces have a significant influence on the location of industries. Industrialization is the foundation stone of the development of any country, while unplanned industrialization and the discharge of waste by industries are causes of environmental pollution. There is a growing degree of awareness and concern globally among nations about environmental degradation and pollution. Environmental resources, endowed by nature rather than man-made, are invaluable natural assets of a country like India. Any developmental activity is directly related to natural and environmental resources. Economic development without environmental considerations brings about environmental crises and damages the quality of life of present as well as future generations. As corporate sectors in the global market, especially in India, are becoming anxious about environmental degradation, naturally more and more emphasis will be placed on how environment-friendly the outcomes are. Maintaining accounts of such environmental and natural resources in the country has become more urgent. Moreover, international awareness and acceptance of the importance of environmental issues have motivated the development of a branch of accounting called “Environmental Accounting”. Environmental accounting attempts to detect and focus on the resources consumed and the costs imposed by an industrial unit on the environment. For the sustainable development of mankind, a healthy environment is indispensable. Gradually, therefore, in many countries including India, environmental matters are being given topmost priority. Accounting and disclosure of environmental matters have increasingly emerged as an important dimension of corporate accounting and reporting practices.
But, as conventional accounting deals mainly with non-living things, the formulation of valuation, measurement, and accounting techniques for incorporating environment-related matters into corporate financial statements sometimes creates problems for the accountant. In light of this situation, the conceptual analysis of the study is concerned with the rationale of environmental accounting for the economy and society as a whole, and focuses on the failures of the traditional accounting system. A modest attempt has been made to throw light on environmental awareness in developing nations like India and discuss the problems associated with the implementation of environmental accounting. The conceptual study also reflects that, despite different anomalies, environmental accounting is becoming an increasingly important aspect of the accounting agenda within the corporate sector in India. Lastly, a conclusion, along with recommendations, has been given to overcome the situation.
Keywords: environmental accounting, environmental degradation, environmental management, environmental resources
Procedia PDF Downloads 340
903 The Effects of Molecular and Climatic Variability on the Occurrence of Aspergillus Species and Aflatoxin Production in Commercial Maize from Different Agro-climatic Regions in South Africa
Authors: Nji Queenta Ngum, Mwanza Mulunda
Abstract:
Introduction
Most African research reports on the frequent aflatoxin contamination of various foodstuffs, with researchers rarely specifying which of the Aspergillus species are present in these commodities. Numerous research works provide evidence of the ability of fungi to grow, thrive, and interact with other crop species, and focus on the fact that these processes are largely affected by climatic variables. South Africa is a water-stressed country with high spatio-temporal rainfall variability; moreover, temperatures have been projected to rise at a rate twice the global rate. This change in weather patterns may lead to crop stress, encouraging mold contamination with subsequent mycotoxin production. In this study, the biodiversity and distribution of Aspergillus species, with their corresponding toxins, in maize from six distinct maize-producing regions with different weather patterns in South Africa were investigated.
Materials and Methods
By applying cultural and molecular methods, a total of 1028 maize samples from six distinct agro-climatic regions were examined for contamination by Aspergillus species, while the high performance liquid chromatography (HPLC) method was applied to analyse the level of contamination by aflatoxins.
Results
About 30% of the overall maize samples were contaminated by at least one Aspergillus species. Less than 30% (28.95%) of the 228 isolates subjected to the aflatoxigenic test were found to possess at least one of the aflatoxin biosynthetic genes. Furthermore, almost 20% were found to be contaminated with aflatoxins, with a mean total aflatoxin concentration level of 64.17 ppb. Amongst the contaminated samples, 59.02% had mean total aflatoxin concentration levels above the SA regulatory limits of 20 ppb for animal and 10 ppb for human consumption.
Conclusion
In this study, climate variables (rainfall reduction) were found to significantly (p<0.001) influence the occurrence of Aspergillus species (especially Aspergillus fumigatus) and the production of aflatoxin in South African commercial maize, depending on the maize variety, the year of cultivation, as well as the agro-climatic region in which the maize is cultivated. This included, amongst others, a reduction in the average annual rainfall of the preceding year to about 21.27 mm which, as opposed to other regions whose average maximum rainfall ranged between 37.24 and 44.1 mm, resulted in a significant increase in the aflatoxin contamination of maize.
Keywords: aspergillus species, aflatoxins, diversity, drought, food safety, HPLC and PCR techniques
Procedia PDF Downloads 73
902 Comparative Analysis of the Impact of Urbanization on Land Surface Temperature in the United Arab Emirates
Authors: A. O. Abulibdeh
Abstract:
The aim of this study is to investigate and compare the changes in Land Surface Temperature (LST) as a function of urbanization, particularly land use/land cover changes, in three cities in the UAE, namely Abu Dhabi, Dubai, and Al Ain. The scale of this assessment will be at the macro- and micro-levels. At the macro-level, a comparative assessment will take place between the three cities in the UAE. At the micro-level, the study will compare the effects of different land uses/land covers on the LST. This will provide clear and quantitative city-specific information on the relationship between urbanization and local spatial intra-urban LST variation in three cities in the UAE. The main objectives of this study are 1) to investigate the development of LST at the macro- and micro-levels between and within three cities in the UAE over a two-decade period, and 2) to examine the impact of different types of land use/land cover on the spatial distribution of LST. Because these three cities face a harsh arid climate, it is hypothesized that (1) urbanization affects and is connected to the spatial changes in LST; (2) different land uses/land covers have different impacts on the LST; and (3) changes in the spatial configuration of land use and vegetation concentration over time would control the urban microclimate on a city scale and the macroclimate on the country scale. This study will be carried out over a 20-year period (1996-2016) and throughout the whole year. The study will compare two distinct periods with different thermal characteristics: the cool/cold period from November to March and the warm/hot period from April to October. The best-practice research method for this topic is to use remote sensing data to capture impacts on both natural and anthropogenic systems.
The project will follow classical remote sensing and mapping techniques to investigate the impact of urbanization, mainly changes in land use/land cover, on LST. The investigation in this study will be performed in two stages. In stage one, remote sensing data will be used to investigate the impact of urbanization on LST at the macroclimate level, where the LST and Urban Heat Island (UHI) will be compared across the three cities using data from the past two decades. Stage two will investigate the impact at the microclimate scale by examining the LST and UHI for particular land use/land cover types. In both stages, LST and urban land cover maps will be generated over the study area. The outcome of this study should represent an important contribution to recent urban climate studies, particularly in the UAE. Based on the aim and objectives of this study, the expected outcomes are as follows: i) to determine the increase or decrease of LST as a result of urbanization in these three cities, ii) to determine the effect of different land uses/land covers on increasing or decreasing the LST.
Keywords: land use/land cover, global warming, land surface temperature, remote sensing
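As a rough sketch of how LST is commonly retrieved from a single satellite thermal band, the steps are: radiance to top-of-atmosphere brightness temperature via the inverse Planck relation, then an emissivity correction per land cover class. The abstract does not name the sensor, so the calibration constants below are the published Landsat 8 Band 10 values, used here purely as an assumption.

```python
import math

# Assumed Landsat 8 TIRS Band 10 calibration constants (the abstract does
# not name the sensor): K1 in W/(m^2 sr um), K2 in kelvin.
K1, K2 = 774.8853, 1321.0789
WAVELENGTH_UM = 10.895          # Band 10 effective wavelength, micrometres
RHO = 1.438e-2                  # h*c/k_B in m*K

def brightness_temperature(radiance):
    """Top-of-atmosphere brightness temperature (K) from spectral radiance."""
    return K2 / math.log(K1 / radiance + 1.0)

def land_surface_temperature(radiance, emissivity):
    """Emissivity-corrected LST (K); emissivity depends on land cover."""
    bt = brightness_temperature(radiance)
    lam = WAVELENGTH_UM * 1e-6
    return bt / (1.0 + (lam * bt / RHO) * math.log(emissivity))

# Example: at the same at-sensor radiance, a bare urban surface
# (emissivity ~0.92) reads hotter than vegetation (~0.98).
print(round(land_surface_temperature(10.5, 0.92) - 273.15, 1), "degC")
```

The emissivity term is what makes land cover classification a prerequisite for the per-class LST comparison described above: the same thermal radiance maps to different surface temperatures for asphalt, sand, and vegetation.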
Procedia PDF Downloads 246
901 Uncertainty Quantification of Fuel Compositions on Premixed Bio-Syngas Combustion at High-Pressure
Abstract:
The effect of fuel variability on premixed combustion of bio-syngas mixtures is of great importance in bio-syngas utilisation. Uncertainties in the concentrations of fuel constituents such as H2, CO and CH4 may lead to unpredictable combustion performance, combustion instabilities and hot spots, which may deteriorate and damage the combustion hardware. Numerical modelling and simulations can assist in understanding the behaviour of bio-syngas combustion with pre-defined species concentrations, while the experimental evaluation of concentration variabilities is expensive. To be more specific, questions such as ‘what is the burning velocity of bio-syngas at a specific equivalence ratio?’ have been answered either experimentally or numerically, while questions such as ‘what is the likelihood of a given burning velocity when the precise concentrations of bio-syngas compositions are unknown, but the concentration ranges are pre-described?’ have not yet been answered. Uncertainty quantification (UQ) methods can be used to tackle such questions and assess the effects of fuel compositions. An efficient probabilistic UQ method based on Polynomial Chaos Expansion (PCE) techniques is employed in this study. The method relies on representing random variables (combustion performances) with orthogonal polynomials such as Legendre or Hermite polynomials. The PCE constructed via Galerkin projection provides easy access to global sensitivities such as main, joint and total Sobol indices. In this study, the impacts of fuel compositions on combustion (adiabatic flame temperature and laminar flame speed) of bio-syngas fuel mixtures are presented invoking this PCE technique at several equivalence ratios. High-pressure effects on bio-syngas combustion instability are obtained using a detailed chemical mechanism, the San Diego mechanism.
Guidance on reducing combustion instability arising from the upstream biomass gasification process is provided by quantifying the significant contributions of composition variations to the variance of physicochemical properties of bio-syngas combustion. It was found that flame speed is very sensitive to hydrogen variability in bio-syngas, and reducing hydrogen uncertainty from upstream biomass gasification processes can greatly reduce bio-syngas combustion instability. Variation of methane concentration, although thought to be important, has limited impact on laminar flame instabilities, especially for lean combustion. Further studies on the UQ of the percentage concentration of hydrogen in bio-syngas can be conducted to guide the safer use of bio-syngas.
Keywords: bio-syngas combustion, clean energy utilisation, fuel variability, PCE, targeted uncertainty reduction, uncertainty quantification
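The coefficient-to-Sobol-index shortcut that makes PCE attractive can be sketched on a toy response. This is a hedged illustration only: the two-input polynomial below is a stand-in for, say, flame speed as a function of H2 and CH4 fractions, not the San Diego mechanism, and the inputs are taken uniform on [-1, 1] so Legendre polynomials are the matching basis.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

def response(x1, x2):
    # stand-in response: strongly driven by x1 ("H2"), weakly by x2 ("CH4")
    return 1.0 + 3.0 * x1 + 0.5 * x2 + 0.8 * x1 * x2

deg = 2
x = rng.uniform(-1, 1, size=(400, 2))
y = response(x[:, 0], x[:, 1])

# design matrix of tensorised Legendre terms P_i(x1) * P_j(x2), i + j <= deg
terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1) if i + j <= deg]
A = np.column_stack([
    L.legval(x[:, 0], np.eye(deg + 1)[i]) * L.legval(x[:, 1], np.eye(deg + 1)[j])
    for i, j in terms
])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# variance of each PCE term: c^2 * prod(1/(2k+1)) for uniform inputs
def term_var(i, j, c):
    return c**2 / ((2 * i + 1) * (2 * j + 1))

total = sum(term_var(i, j, c) for (i, j), c in zip(terms, coef) if (i, j) != (0, 0))
S1 = sum(term_var(i, j, c) for (i, j), c in zip(terms, coef) if i > 0 and j == 0) / total
S2 = sum(term_var(i, j, c) for (i, j), c in zip(terms, coef) if j > 0 and i == 0) / total
print(round(S1, 3), round(S2, 3))  # main-effect Sobol indices; x1 dominates
```

Once the coefficients are fitted, no further model evaluations are needed: the main, joint, and total Sobol indices all fall out of sums of squared coefficients, which is precisely why the abstract can attribute most of the flame-speed variance to hydrogen.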
Procedia PDF Downloads 273
900 Occupational Safety and Health in the Wake of Drones
Authors: Hoda Rahmani, Gary Weckman
Abstract:
The body of research examining the integration of drones into various industries is expanding rapidly. Despite progress made in addressing the cybersecurity concerns for commercial drones, knowledge deficits remain in determining the potential occupational hazards and risks of drone use to employees’ well-being and health in the workplace. This creates difficulty in identifying key approaches to risk mitigation strategies and thus reflects the need for raising awareness among employers, safety professionals, and policymakers about workplace drone-related accidents. The purpose of this study is to investigate the prevalence of and possible risk factors for drone-related mishaps by comparing the application of drones in the construction and manufacturing industries. The chief reason for considering these specific sectors is to ascertain whether there exists any significant difference between indoor and outdoor flights, since most construction sites use drones outdoors while manufacturing facilities mostly use them indoors. Therefore, the current research seeks to examine the causes and patterns of workplace drone-related mishaps and suggest possible ergonomic interventions through data collection. Potential ergonomic practices to mitigate hazards associated with flying drones could include providing operators with professional training, conducting a risk analysis, and promoting the use of personal protective equipment. For the purpose of data analysis, two data mining techniques, the random forest and association rule mining algorithms, will be applied to find meaningful associations and trends in the data as well as influential features that have an impact on the occurrence of drone-related accidents in the construction and manufacturing sectors. In addition, Spearman’s correlation and chi-square tests will be used to measure the possible correlation between different variables.
Indeed, by recognizing risks and hazards, occupational safety stakeholders will be able to pursue data-driven and evidence-based policy change with the aim of reducing drone mishaps, increasing productivity, creating a safer work environment, and extending human performance in safe and fulfilling ways. This research study was supported by the National Institute for Occupational Safety and Health through the Pilot Research Project Training Program of the University of Cincinnati Education and Research Center Grant #T42OH008432.
Keywords: commercial drones, ergonomic interventions, occupational safety, pattern recognition
Procedia PDF Downloads 207
899 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School
Authors: Martín Pratto Burgos
Abstract:
The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course has been designed to help students prepare for the math courses that are essential for Engineering Degrees, namely Math1, Math2, and Math3 in this research. The research proposes to build a model that can accurately predict a student's activity and academic progress based on their performance in the three essential mathematical courses. Additionally, there is a need for a model that can forecast the influence of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling using the Generalised Linear Model. The dataset includes information from 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created for data that follows a binomial distribution, using the R programming language. Model 1 retains variables whose p-values are less than 0.05, and Model 2 uses the stepAIC function to remove variables and obtain the lowest AIC score. After applying Principal Component Analysis, the main components represented on the y-axis are the approval of the Introductory Mathematical Course, and on the x-axis the approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered student activity, performed the best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, a student's engagement in school activities will continue for three years after the approval of the Introductory Mathematical Course, provided they have successfully completed the Math1 and Math2 courses. Passing the Math3 course does not have any effect on the student's activity. Concerning academic progress, the best fit is Model 1.
It has an AUC of 0.56 and an accuracy rate of 91%. The model indicates that if a student passes the three first-year courses, they will progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect the student's activity and academic progress. The best model to explain the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. It shows that if students pass the Introductory Mathematical Course, it will help them pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Combining the three predictive models: if students pass the Math1 and Math2 courses, they will stay active for three years after taking the Introductory Mathematical Course and will continue following the recommended engineering curriculum. Additionally, the Introductory Mathematical Course helps students pass Math1 and Math2 when they start Engineering School. The models obtained in the research do not consider the time students took to pass the three math courses, but they can successfully assess courses in the university curriculum.
Keywords: machine-learning, engineering, university, education, computational models
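The AUC figures quoted above can be computed without any modelling library via the rank interpretation of the ROC curve: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch with made-up labels and scores, not the study's data:

```python
def auc(labels, scores):
    """AUC via its rank / Mann-Whitney interpretation: the fraction of
    (positive, negative) pairs in which the positive is scored higher,
    with ties counting half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities for five students (1 = stayed active)
labels = [1, 0, 1, 0, 1]
scores = [0.9, 0.3, 0.8, 0.6, 0.4]
print(round(auc(labels, scores), 3))
```

An AUC of 0.5 means the model ranks students no better than chance, which puts Model 1's 0.56 for academic progress in context despite its high raw accuracy on an imbalanced outcome.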
Procedia PDF Downloads 93
898 Biodeterioration of Historic Parks of UK by Algae
Authors: Syeda Fatima Manzelat
Abstract:
This chapter investigates the biodeterioration of parks in the UK caused by lichens, focusing on Campbell Park and Great Linford Manor Park in Milton Keynes. The study first isolates and identifies potent biodeteriogens responsible for potential biodeterioration in these parks, enumerating and recording different classes and genera of lichens known for their biodeteriorative properties. It then examines the implications of lichens for biodeterioration at historic sites within these parks, considering impacts on historic structures, the environment, and associated health risks. Conservation strategies and preventive measures are discussed before concluding.
Lichens, characterized by their symbiotic association between a fungus and an alga, thrive on various surfaces including building materials, soil, rock, wood, and trees. The fungal component provides structure and protection, while the algal partner performs photosynthesis. Lichens collected from the park sites, such as Xanthoria, Cladonia, and Arthonia, were observed affecting the historic walls, objects, and trees. Their biodeteriorative impacts were visible to the naked eye, contributing to aesthetic and structural damage. The study highlights the role of lichens as bioindicators of pollution, sensitive to changes in air quality. The presence and diversity of lichens provide insights into the air quality and pollution levels in the parks. However, lichens also pose health risks, with certain species causing respiratory issues, allergies, skin irritation, and other toxic effects in humans and animals. Conservation strategies discussed include regular monitoring, biological and chemical control methods, physical removal, and preventive cleaning. The study emphasizes the importance of a multifaceted, multidisciplinary approach to managing lichen-induced biodeterioration.
Future management practices could involve advanced techniques such as eco-friendly biocides and self-cleaning materials to effectively control lichen growth and preserve historic structures. In conclusion, this chapter underscores the dual role of lichens as agents of biodeterioration and indicators of environmental quality. Comprehensive conservation management approaches, encompassing monitoring, targeted interventions, and advanced conservation methods, are essential for preserving the historic and natural integrity of parks like Campbell Park and Great Linford Manor Park.
Keywords: biodeterioration, historic parks, algae, UK
Procedia PDF Downloads 30
897 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments
Authors: Rohit Dey, Sailendra Karra
Abstract:
This paper aims at solving the problem of multi-target searching in a Global Positioning System (GPS) denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (viz., GPS) that is shared by the entire swarm, which, in turn, limits their application in real-world scenarios. This can be attributed to the fact that robots in a swarm need to share information among themselves regarding their location and signals from targets to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this very issue by eliminating any dependency of the search algorithm on a predetermined global coordinate frame, instead unifying the relative coordinate frames of individual robots when they are within communication range, therefore making the system more robust in real scenarios. Our algorithm assumes that all the robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows Lévy walk random exploration until it comes within range of other robots. When two or more robots are within communication range, they share sensor information and their locations w.r.t. their coordinate frames, based on which we unify their coordinate frames. They can then share information about the areas that were already explored, information about the surroundings, and target signals from their locations to make decisions about their future movement based on the search algorithm.
During the process of exploration, there can be several small groups of robots, each with its own coordinate system, but eventually all the robots are expected to converge to one global coordinate frame in which they can communicate information on the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that our modified Particle Swarm Optimization (PSO), running without global information, can still achieve results comparable to basic PSO working with GPS. In the full paper, we plan to compare different strategies for unifying the coordinate system and to implement them on other bio-inspired algorithms to work in GPS-denied environments.
Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems
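The pairwise frame unification at the heart of this approach can be sketched in 2D from a single mutual range-bearing exchange. This is a hedged illustration, not the paper's exact formulation: it assumes each robot's frame is aligned with its own heading and that both bearings are measured simultaneously and noise-free.

```python
import math

def unify_frames(r, bearing_ab, bearing_ba):
    """Given robot A's range r and bearing to B (in A's frame) and B's
    bearing to A (in B's frame), return the rotation phi and translation t
    that map B-frame points into A's frame. The direction B->A is
    bearing_ab + pi in A's frame and bearing_ba in B's frame, so the
    frame rotation is their difference."""
    phi = (bearing_ab + math.pi) - bearing_ba
    t = (r * math.cos(bearing_ab), r * math.sin(bearing_ab))
    return phi, t

def to_frame_a(p_b, phi, t):
    """Apply the rigid transform: rotate a B-frame point by phi, then shift."""
    x, y = p_b
    return (math.cos(phi) * x - math.sin(phi) * y + t[0],
            math.sin(phi) * x + math.cos(phi) * y + t[1])

# Example: B sits 5 m from A at bearing 0; B sees A at bearing pi/2 in its
# own frame, so B's frame is rotated pi/2 relative to A's.
phi, t = unify_frames(5.0, 0.0, math.pi / 2)
# A target B reports at (1, 0) in its own frame lands at (5, 1) in A's frame.
print(to_frame_a((1.0, 0.0), phi, t))
```

Composing such pairwise transforms is how separate sub-swarm frames can be merged whenever two groups come into communication range, without any robot ever knowing an absolute position.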
Procedia PDF Downloads 136
896 Surgical Hip Dislocation of Femoroacetabular Impingement: Survivorship and Functional Outcomes at 10 Years
Authors: L. Hoade, O. O. Onafowokan, K. Anderson, G. E. Bartlett, E. D. Fern, M. R. Norton, R. G. Middleton
Abstract:
Aims: Femoroacetabular impingement (FAI) was first recognised as a potential driver of hip pain at the turn of the millennium. While there is an increasing trend towards surgical management of FAI by arthroscopic means, open surgical hip dislocation and debridement (SHD) remains the gold standard of care in terms of reported outcome measures. (1) Long-term functional and survivorship outcomes of SHD as a treatment for FAI are yet to be sufficiently reported in the literature. This study sets out to help address this imbalance. Methods: We undertook a retrospective review of our institutional database for all patients who underwent SHD for FAI between January 2003 and December 2008. A total of 223 patients (241 hips) were identified and underwent a ten-year review with a standardised radiograph and a patient-reported outcome measures questionnaire. The primary outcome measure of interest was survivorship, defined as progression to total hip arthroplasty (THA). Negative predictive factors were analysed. Secondary outcome measures of interest were survivorship to further (non-arthroplasty) surgery, functional outcomes as reflected by patient-reported outcome measure (PROM) scores, and whether a learning curve could be identified. Results: The final cohort consisted of 131 females and 110 males, with a mean age of 34 years. There was an overall native hip joint survival rate of 85.4% at ten years. Those who underwent a THA were significantly older at initial surgery and had radiographic evidence of preoperative osteoarthritis and pre- and post-operative acetabular undercoverage. In those who had not progressed to THA, the average Non-Arthritic Hip Score and Oxford Hip Score at ten-year follow-up were 72.3% and 36/48, respectively, and 84% still deemed their surgery worthwhile. A learning curve was found to exist that was predicated on case selection rather than surgical technique.
Conclusion: This is only the second study to evaluate the long-term outcomes (beyond ten years) of SHD for FAI and the first outside the originating centre. Our results suggest that, with correct patient selection, this remains an operation with worthwhile outcomes at ten years. How the results of open surgery compare to those of arthroscopy remains to be answered. While these results precede the advent of collision software modelling tools, this data helps set a benchmark for future comparison of other techniques' effectiveness at the ten-year mark.
Keywords: femoroacetabular impingement, hip pain, surgical hip dislocation, hip debridement
Procedia PDF Downloads 81
895 Engage, Connect, Empower: Agile Approach in the University Students' Education
Authors: D. Bjelica, T. Slavinski, V. Vukimrovic, D. Pavlovic, D. Bodroza, V. Dabetic
Abstract:
Traditional methods and techniques used in higher education may significantly shape university students' perception of the quality of the teaching process. Students’ satisfaction with the university experience may be affected by the chosen educational approaches. Contemporary project management trends recognize the benefits of agile approaches, so modern practice highlights their usage, especially in the IT industry. A key research question concerns the possibility of applying agile methods in youth education. As agile methodology emphasizes iterative-incremental delivery of results, its employment could be remarkably fruitful in education. This paper demonstrates the application of the agile concept in university students’ education through the continuous delivery of student solutions. Therefore, based on the fundamental values and principles of the Agile Manifesto, the paper will analyze students' performance and lessons learned in their encounter with the agile environment. The research is based on qualitative and quantitative analysis that includes sprints, i.e., the preparation and realization of student tasks in shorter iterations. Consequently, the performance of student teams will be monitored through iterations, as well as the process of adaptive planning and realization. Grounded theory methodology has been used in this research, as well as descriptive statistics and the Mann-Whitney and Kruskal-Wallis tests for group comparison. The developed constructs of the model will be showcased through qualitative research, then validated through a pilot survey, and eventually tested as a concept in the final survey. The paper highlights the variability of educational curricula based on university students' feedback, which will be collected at the end of every sprint, and points to inconsistency in university students' satisfaction depending on the approaches applied in education.
The values delivered by the lecturers will also be continuously monitored; thus, they will be prioritized according to students' requests. The minimum viable product, as an early delivery of results, will be particularly emphasized in the implementation process. The paper offers both theoretical and practical implications. This research contains valuable lessons that may be applied by educational institutions in curriculum creation processes, or by lecturers in curriculum design and teaching. On the other hand, they can be beneficial for increasing university students' satisfaction with respect to teaching styles, gained knowledge, or even educational content.
Keywords: academic performances, agile, higher education, university students' satisfaction
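The Mann-Whitney test mentioned in the methods compares two groups by counting how often a value from one group exceeds a value from the other. A minimal sketch of the U statistic, using hypothetical sprint satisfaction scores rather than study data:

```python
def mann_whitney_u(sample_a, sample_b):
    """U statistic for sample_a: the number of (a, b) pairs with a > b,
    counting tied pairs as half. Suitable for ordinal data such as
    satisfaction ratings."""
    return sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in sample_a for b in sample_b)

# Hypothetical 1-5 satisfaction scores from two sprints
sprint1 = [3, 4, 2, 5, 3]
sprint2 = [4, 5, 4, 3, 5]
u1 = mann_whitney_u(sprint1, sprint2)
u2 = mann_whitney_u(sprint2, sprint1)
assert u1 + u2 == len(sprint1) * len(sprint2)  # identity: U1 + U2 = n1 * n2
print(u1, u2)
```

In practice the U value is compared against its null distribution (or a normal approximation) for a p-value; the sketch shows only the statistic itself, which is the rank-based quantity that makes the test appropriate for ordinal satisfaction data.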
Procedia PDF Downloads 128894 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River
Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko
Abstract:
Contamination sites like landfills often pose significant risks to receptors like surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing their need for protection from water quality degradation. In this paper, a case study presents the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River, situated in New Jersey. The study, performed over a two-year period, included in-depth field evaluation of both the groundwater and surface water systems and was supplemented by computer modeling. The analysis required delineation of a representative average daily groundwater discharge from the Landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of the daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced on groundwater levels during the aquifer pumping test were filtered out using an advanced algorithm, from which aquifer parameter values were estimated using conventional curve-matching techniques. The estimated hydraulic conductivity values obtained from individual observation wells closely agree with tidally derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts. MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the Landfill to the Raritan River shoreline. A surface water dispersion model based upon a bathymetric and flow study of the river was used to simulate the contaminant concentrations over space within the river.
The modeling results helped demonstrate that, because of natural attenuation, the Landfill does not have a measurable impact on the river, a finding confirmed by an extensive surface water quality study.
Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling
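The mass-loading estimate described in the abstract reduces to a unit-conversion product of discharge and concentration, and a fully mixed river concentration gives a conservative upper bound before dispersion modeling; a minimal sketch, using hypothetical numbers rather than the study's measured values:

```python
def daily_mass_loading(discharge_m3_per_day, concentration_mg_per_l):
    """Mass loading (kg/day) = discharge (m^3/day) x concentration (mg/L).

    1 m^3 = 1000 L and 1 mg = 1e-6 kg, so the net conversion factor is 1e-3.
    """
    return discharge_m3_per_day * concentration_mg_per_l * 1e-3

def fully_mixed_concentration(load_kg_per_day, river_flow_m3_per_day):
    """Upper-bound in-river concentration (mg/L) assuming complete mixing
    over one tidally averaged day; a dispersion model refines this."""
    return load_kg_per_day * 1e3 / river_flow_m3_per_day

# Hypothetical values: 500 m^3/day of groundwater at 2 mg/L into a river
# carrying 5e6 m^3/day (illustration only, not the study's data)
load = daily_mass_loading(500.0, 2.0)
print(load, fully_mixed_concentration(load, 5.0e6))
```

The large ratio of river flow to groundwater discharge is what makes natural attenuation plausible in cases like this one.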
Procedia PDF Downloads 257893 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept
Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani
Abstract:
Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of the electric field in an electrical mobility spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by flow conditions, geometry, the electric field and the particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain particle trajectories in the device and thereby calculate the signal reported by each electrometer. According to the output signals (resulting from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in the Shannon sense, is the "average amount of information contained in an event, sample or character extracted from a data stream".
Evaluating the responses (signals) obtained via the various configurations of detecting rings, the configuration that gave the best predictions of the size distributions of the injected particles was the modified configuration. It was also the one with the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Finally, various clouds of particles were introduced to the simulations and the predicted size distributions were compared to the exact size distributions.
Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, von Neumann entropy
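The Shannon-entropy benchmark on a transfer matrix can be sketched as follows; the 3-ring, 2-size-class matrix is a toy illustration, not the simulated instrument's actual transfer matrix, and averaging column entropies is one simple way to score how evenly the rings spread information across size classes.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum p_i * log2(p_i), taking 0*log(0) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def matrix_entropy(transfer_matrix):
    """Average column entropy of a transfer matrix (rows = detecting
    rings, columns = particle size/mobility classes). Each normalized
    column is treated as the probability that a particle of that class
    lands on each ring; a higher average entropy means the ring
    configuration carries more information about particle size."""
    n_rows = len(transfer_matrix)
    n_cols = len(transfer_matrix[0])
    entropies = []
    for j in range(n_cols):
        col = [transfer_matrix[i][j] for i in range(n_rows)]
        total = sum(col)
        entropies.append(shannon_entropy([c / total for c in col]))
    return sum(entropies) / n_cols

# Toy 3-ring x 2-class transfer matrix (illustrative only)
T = [[0.8, 0.1],
     [0.1, 0.1],
     [0.1, 0.8]]
print(round(matrix_entropy(T), 4))
```

Candidate ring configurations could then be ranked by this score, mirroring the paper's use of entropy as an optimality benchmark.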
Procedia PDF Downloads 342892 Synthesis of High-Antifouling Ultrafiltration Polysulfone Membranes Incorporating Low Concentrations of Graphene Oxide
Authors: Abdulqader Alkhouzaam, Hazim Qiblawey, Majeda Khraisheh
Abstract:
Membrane treatment for desalination and wastewater treatment is one of the promising solutions for affordable clean water. It is a developing technology throughout the world and is considered the most effective and economical method available. However, the limitations of membranes' mechanical and chemical properties restrict their industrial applications. Hence, developing novel membranes has been the focus of most studies in the water treatment and desalination sector, aiming to find new materials that can improve separation efficiency while reducing membrane fouling, the most important challenge in this field. Graphene oxide (GO) is one of the materials that have recently been investigated in the membrane water treatment sector. In this work, ultrafiltration polysulfone (PSF) membranes with high antifouling properties were synthesized by incorporating different loadings of GO. High-oxidation-degree GO was synthesized using a modified Hummers' method. The synthesized GO was characterized using different analytical techniques, including Fourier transform infrared spectroscopy with a universal attenuated total reflectance sensor (FTIR-UATR), Raman spectroscopy, and CHNSO elemental analysis. The CHNSO analysis showed a high oxidation degree of GO, represented by its oxygen content (50 wt.%). Ultrafiltration PSF membranes incorporating GO were then fabricated using the phase inversion technique. The prepared membranes were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM), which showed a clear effect of GO on the physical structure and morphology of PSF. The water contact angle of the membranes was measured and showed better hydrophilicity of the GO membranes compared to pure PSF, caused by the hydrophilic nature of GO. Separation properties of the prepared membranes were investigated using a cross-flow membrane system. Antifouling properties were studied using bovine serum albumin (BSA) and humic acid (HA) as model foulants.
It was found that GO-based membranes exhibit higher antifouling properties than pure PSF. With BSA, the flux recovery ratio (FRR) increased from 65.4 ± 0.9% for pure PSF to 84.0 ± 1.0% with a loading of 0.05 wt.% GO in PSF. With HA as the model foulant, the FRR increased from 87.8 ± 0.6% to 93.1 ± 1.1% with 0.02 wt.% GO in PSF. The pure water permeability (PWP) decreased with GO loading, from 181.7 L.m⁻².h⁻¹.bar⁻¹ for pure PSF to 181.1 and 157.6 L.m⁻².h⁻¹.bar⁻¹ with 0.02 and 0.05 wt.% GO, respectively. It can be concluded from the obtained results that incorporating a low loading of GO can enhance the antifouling properties of PSF, hence improving its lifetime and reuse.
Keywords: antifouling properties, GO-based membranes, hydrophilicity, polysulfone, ultrafiltration
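The flux recovery ratio used as the antifouling measure is simply the pure-water flux recovered after fouling and rinsing, expressed as a percentage of the initial pure-water flux; a minimal sketch with hypothetical flux values (only the 84.0% FRR figure corresponds to a value reported in the abstract):

```python
def flux_recovery_ratio(j_initial, j_recovered):
    """FRR (%) = 100 * J_recovered / J_initial, where J_initial is the
    pure-water flux of the clean membrane and J_recovered the pure-water
    flux after fouling with the model foulant and rinsing. Values near
    100% indicate mostly reversible fouling."""
    return 100.0 * j_recovered / j_initial

# Hypothetical fluxes in L.m^-2.h^-1.bar^-1 (not the study's raw data)
print(round(flux_recovery_ratio(157.6, 132.4), 1))
```

A higher FRR at a modest permeability penalty is the trade-off the abstract reports for the 0.05 wt.% GO loading.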
Procedia PDF Downloads 142891 Fracture Toughness Characterizations of Single Edge Notch (SENB) Testing Using DIC System
Authors: Amr Mohamadien, Ali Imanpour, Sylvester Agbo, Nader Yoosef-Ghodsi, Samer Adeeb
Abstract:
The fracture toughness resistance curve (e.g., J-R curve and crack tip opening displacement (CTOD) or δ-R curve) is important in facilitating strain-based design and integrity assessment of oil and gas pipelines. This paper aims to present laboratory experimental data to characterize the fracture behavior of pipeline steel. The influential parameters associated with the fracture of API 5L X52 pipeline steel, including different initial crack sizes, were experimentally investigated using single edge notch bend (SENB) specimens. A total of 9 small-scale specimens with different crack-length-to-specimen-depth ratios were prepared and tested in single edge notch bending. ASTM E1820 and BS 7448 provide testing procedures for constructing the fracture resistance curve (Load-CTOD, CTOD-R, or J-R) from test results. However, these procedures are limited by the standard specimens' dimensions, displacement gauges, and calibration curves. To overcome these limitations, this paper presents the use of small-scale specimens and a 3D digital image correlation (DIC) system to extract the parameters required for fracture toughness estimation. Fracture resistance curve parameters in terms of crack mouth opening displacement (CMOD), crack tip opening displacement (CTOD), and crack growth length (∆a) were extracted from the test results by utilizing the DIC system, and an improved regression-fitted resistance function (CTOD vs. crack growth, or J-integral vs. crack growth) dependent on a variety of initial crack sizes was constructed and presented. The obtained results were compared to the available results of classical physical measurement techniques, and acceptable agreement was observed. Moreover, a case study was implemented to estimate the maximum strain value that initiates stable crack growth. This might be of interest for developing more accurate strain-based damage models.
The results of laboratory testing in this study offer a valuable database for developing and validating damage models able to predict crack propagation in pipeline steel, accounting for the influential parameters associated with fracture toughness.
Keywords: fracture toughness, crack propagation in pipeline steels, CTOD-R, strain-based damage model
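Resistance-curve regression of the kind described above is conventionally a power-law fit in log-log space; a minimal sketch on synthetic (Δa, CTOD) pairs, assuming the common form CTOD = C(Δa)^m rather than the authors' exact improved fitting function:

```python
import math

def fit_power_law(delta_a, ctod):
    """Least-squares fit of the resistance curve CTOD = C * (da)^m.

    Taking logs gives ln(CTOD) = ln(C) + m * ln(da), a straight line,
    so ordinary linear regression in log-log space recovers C and m.
    """
    xs = [math.log(a) for a in delta_a]
    ys = [math.log(d) for d in ctod]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    c = math.exp(mean_y - m * mean_x)
    return c, m

# Synthetic (da, CTOD) pairs lying exactly on a power law, for illustration
da = [0.2, 0.5, 1.0, 1.5]          # crack growth, mm
ctod = [0.5 * a ** 0.6 for a in da]  # mm
c, m = fit_power_law(da, ctod)
print(round(c, 3), round(m, 3))
```

With real DIC-derived data the points scatter about the curve, and standards such as ASTM E1820 additionally prescribe data qualification limits before the fit.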
Procedia PDF Downloads 62890 Investigating the Impact of Enterprise Resource Planning System and Supply Chain Operations on Competitive Advantage and Corporate Performance (Case Study: Mamot Company)
Authors: Mohammad Mahdi Mozaffari, Mehdi Ajalli, Delaram Jafargholi
Abstract:
The main purpose of this study is to investigate the impact of ERP (Enterprise Resource Planning) and SCM (Supply Chain Management) systems on the competitive advantage and performance of Mamot Company. Information was collected through library studies and field research. A questionnaire containing 38 questions was used to collect the data needed to determine the relationships between the research variables. The orientation of the research is applied. The statistical population consists of managers and experts familiar with the SCM and ERP systems and numbers 210 people. Simple random sampling was used, with a sample size of 136 people. The reliability of the distributed questionnaire was evaluated using Cronbach's alpha, whose value exceeded 0.70, confirming reliability. Face validity was used to establish the validity of the questionnaire, confirmed by an impact score greater than 1.5. In the present study, univariate analysis was used for measures of central tendency, dispersion, and deviation from symmetry, giving a general picture of the population. Bivariate analysis was used to test the hypotheses and measure the correlation coefficients between variables; SPSS software was used. Finally, multivariate analysis with statistical techniques related to PLS structural equations was applied to determine the effects of the independent variables on the dependent variables of the research and the structural relationships between the variables. The results of testing the research hypotheses indicate that: 1. Supply chain management practices have a positive impact on the competitive advantage of the Mamot industrial complex. 2.
Supply chain management practices have a positive impact on the performance of the Mamot industrial complex. 3. The enterprise resource planning system has a positive impact on the performance of the Mamot industrial complex. 4. The enterprise resource planning system has a positive impact on Mamot's competitive advantage. 5. Competitive advantage has a positive impact on the performance of the Mamot industrial complex. 6. The enterprise resource planning system has a positive impact on the Mamot industrial complex's supply chain management. The above results indicate that the enterprise resource planning and supply chain management systems have an impact on the competitive advantage and corporate performance of Mamot Company.
Keywords: enterprise resource planning, supply chain management, competitive advantage, Mamot company performance
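The Cronbach's alpha reliability check reported above can be sketched with the standard formula; the item scores below are hypothetical, not the study's questionnaire data.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance
    of respondent totals), where k is the number of items.

    item_scores: one list per questionnaire item, each holding the
    scores of all respondents for that item. Values above ~0.70 are
    conventionally taken as acceptable internal consistency.
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def sample_variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(sample_variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / sample_variance(totals))

# Hypothetical 3-item, 4-respondent Likert data (illustration only)
items = [[4, 5, 3, 5],
         [4, 4, 3, 5],
         [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 3))
```

A full 38-item questionnaire would be scored the same way, one list of respondent scores per item.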
Procedia PDF Downloads 97889 Response of Planktonic and Aggregated Bacterial Cells to Water Disinfection with Photodynamic Inactivation
Authors: Thayse Marques Passos, Brid Quilty, Mary Pryce
Abstract:
The interest in developing alternative techniques to obtain safe water, free from pathogens and hazardous substances, has grown in recent times. The photodynamic inactivation (PDI) of microorganisms is a promising, ecologically friendly and multi-target approach to water disinfection. It uses visible light as an energy source combined with a photosensitiser (PS) to transfer energy/electrons to a substrate or to molecular oxygen, generating reactive oxygen species that exert cidal effects on cells. PDI has mainly been used in clinical studies, and investigation of its application to water disinfection is relatively recent. The majority of studies use planktonic cells. However, in their natural environments, bacteria quite often do not occur as freely suspended (planktonic) cells but in cell aggregates that are either free-floating or attached to surfaces as biofilms. Microbes can form aggregates and biofilms as a strategy to protect themselves from environmental stress. As aggregates, bacteria have better metabolic function, communicate more efficiently, and are more resistant to biocidal compounds than their planktonic forms. Among the bacteria able to form aggregates are members of the genus Pseudomonas, a very diverse group widely distributed in the environment. Pseudomonas species can form aggregates/biofilms in water and can cause particular problems in water distribution systems. The aim of this study was to evaluate the effectiveness of photodynamic inactivation in killing a range of planktonic cells, including Escherichia coli DSM 1103, Staphylococcus aureus DSM 799, Shigella sonnei DSM 5570, Salmonella enterica and Pseudomonas putida DSM 6125, and aggregating cells of Pseudomonas fluorescens DSM 50090 and Pseudomonas aeruginosa PAO1. The experiments were performed in glass Petri dishes containing the bacterial suspension and the photosensitiser, irradiated with a multi-LED source (wavelengths 430 nm and 660 nm) for different time intervals.
The responses of the cells were monitored using the pour plate technique and confocal microscopy. The study showed that bacteria belonging to the Pseudomonads group tend to be more tolerant of PDI. While E. coli, S. aureus, S. sonnei and S. enterica required a dosage ranging from 39.47 J/cm² to 59.21 J/cm² for a 5-log reduction, Pseudomonads needed a dosage ranging from 78.94 to 118.42 J/cm², with a higher dose required when the cells were aggregated.
Keywords: bacterial aggregation, photoinactivation, Pseudomonads, water disinfection
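The light dosages reported above follow from dose (J/cm²) = irradiance × exposure time; a minimal sketch, where the irradiance value is assumed for illustration (the abstract does not report the LED irradiance):

```python
def light_dose_j_per_cm2(irradiance_mw_per_cm2, exposure_s):
    """Delivered light dose (J/cm^2) = irradiance (W/cm^2) x time (s);
    1 mW = 1e-3 W."""
    return irradiance_mw_per_cm2 * 1e-3 * exposure_s

def exposure_time_s(target_dose_j_per_cm2, irradiance_mw_per_cm2):
    """Exposure time needed to reach a target dose at a given irradiance."""
    return target_dose_j_per_cm2 / (irradiance_mw_per_cm2 * 1e-3)

# Assumed irradiance of 65.78 mW/cm^2 (hypothetical); at this level a
# 10-minute exposure delivers the abstract's lowest reported 5-log dose
print(light_dose_j_per_cm2(65.78, 600))
```

The same relation shows why the tolerant Pseudomonads, needing roughly twice the dose, require roughly twice the exposure time at fixed irradiance.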
Procedia PDF Downloads 294888 Validity and Reliability of Communication Activities of Daily Living- Second Edition and Assessment of Language-related Functional Activities: Comparative Evidence from Arab Aphasics
Authors: Sadeq Al Yaari, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaari
Abstract:
Background: Validation of the communication activities of daily living-second edition (CADL-2) and assessment of language-related functional activities (ALFA) tests is a critical investment decision, and activities related to language impairments are often underestimated. The literature indicates that age factors and gender differences may affect the performance of aphasics. Understanding these influential factors is therefore highly important to neuropsycholinguists and speech-language pathologists (SLPs). Purpose: The goal of this study is twofold: (1) to validate (or invalidate) the CADL-2 and ALFA tests, and (2) to investigate whether or not the two assessment tests are reliable. Design: A comparative study was made between the results obtained from the analyses of the Arabic versions of the CADL-2 and ALFA tests. Participants: Communication activities of daily living and language-related functional activities were assessed from the results obtained from 100 adult aphasics (50 males, 50 females; ages 16 to 65). Procedures: First, the two translated and standardized Arabic versions of the CADL-2 and ALFA tests were introduced to the Arab aphasics under investigation. Armed with the two new versions of the tests, one of the researchers assessed language-related functional communication and activities. Outcomes drawn from the comparative analyses were then qualitatively and statistically analyzed. Main outcomes and results: Regarding the validity of the CADL-2 and ALFA, it was found that … is more valid in both pre- and posttests. Concerning the reliability of the two tests, it was found that … is more reliable in both pre- and posttests, which means that … is more trustworthy. It should also be noted that the relationship between age and gender was very weak, as there were no remarkable gender differences between the two in either the CADL-2 or the ALFA pre- and posttests.
Conclusions & Implications: The CADL-2 and ALFA tests were found to be valid and reliable. In contrast to previous studies, age and gender were not significantly associated with the validity and reliability results of the two assessment tests. In clearer terms, age and gender patterns do not affect the validation of these two tests. Future studies might focus on more complex questions, including the functional use of CADL-2 and ALFA; how gender and puberty influence the results when the sample is large; the effects of each type of aphasia on the final outcomes; and the measurement results of imaging techniques.
Keywords: CADL-2, ALFA, comparison, language test, Arab aphasics, validity, reliability, neuropsycholinguistics
Procedia PDF Downloads 35