Search results for: testing matrix

724 Geochemical Characterization for Identification of Hydrocarbon Generation: Implication of Unconventional Gas Resources

Authors: Yousif M. Makeen

Abstract:

This research will address the processes of geochemical characterization and hydrocarbon generation occurring within hydrocarbon source and/or reservoir rocks. The geochemical characterization includes the organic-inorganic associations that influence the storage capacity of unconventional hydrocarbon resources (e.g., shale gas) and the migration of oil/gas within petroleum source/reservoir rocks. Kerogen, the precursor of petroleum, occurs in various forms and types and may be oil-prone, gas-prone, or both. China has a number of petroleum-bearing sedimentary basins commonly associated with shale gas, oil sands, and oil shale. The Sichuan basin, selected for this study, has recorded notable successful discoveries of shale gas, especially in its marine shale reservoirs. However, notable discoveries of lacustrine shale in the north-east Fuling area indicate the accumulation of shale gas within non-marine source rocks. The objective of this study is to evaluate the hydrocarbon storage capacity, generation, and retention processes in the rock matrix of hydrocarbon source/reservoir rocks within the Sichuan basin using advanced X-ray tomography 3D imaging (Micro-CT), SEM (scanning electron microscopy), optical microscopy, and organic geochemical facilities (e.g., vitrinite reflectance and UV light). The preliminary results show that the lacustrine shales under investigation act as both source and reservoir rocks and are characterized by very fine grains and very low permeability and porosity. Three pore types have been characterized in the lacustrine shales using X-ray computed tomography (CT): organic matter pores, interparticle pores, and intraparticle pores. The benefits of this study would be more successful oil and gas exploration and a higher recovery factor, with a direct economic impact on China and the surrounding region. Methodologies: The SRA TOC/TPH or Rock-Eval technique will be used to determine source rock richness (S1 and S2) and Tmax. TOC analysis will be carried out using a multi N/C 3100 analyzer. The SRA and TOC results will be used to calculate other parameters such as the hydrogen index (HI) and production index (PI). This analysis indicates the quantity of organic matter. The minimum TOC limits generally accepted as essential for a source rock are 0.5% for shales and 0.2% for carbonates. Contributions: This research could resolve issues related to oil potential, provide targets, and serve as a pathfinder for future exploration activity in the Sichuan basin.
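The hydrogen index (HI) and production index (PI) mentioned in the methodology follow directly from the Rock-Eval/SRA outputs; the short sketch below shows that arithmetic using the standard definitions, with S1, S2, and TOC values chosen purely for illustration rather than taken from the study.

```python
def rock_eval_indices(s1, s2, toc):
    """Derive standard source-rock indices from Rock-Eval / SRA outputs.

    s1, s2 : free and pyrolysable hydrocarbons (mg HC / g rock)
    toc    : total organic carbon (wt%)
    """
    hi = 100.0 * s2 / toc        # hydrogen index, mg HC / g TOC
    pi = s1 / (s1 + s2)          # production index (dimensionless)
    return hi, pi

# Illustrative values only (not measurements from the study)
hi, pi = rock_eval_indices(s1=0.8, s2=4.5, toc=2.1)
print(f"HI = {hi:.0f} mg HC/g TOC, PI = {pi:.2f}")
```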

Keywords: shale gas, unconventional resources, organic chemistry, Sichuan basin

Procedia PDF Downloads 14
723 Sustaining the Organizational Performance as Well as Maintaining Employee Satisfaction by Governing Work Life Balance

Authors: I. Gupta, C. Kathpal

Abstract:

Introduction: Time is really the only capital that any human being has, and the only thing he cannot afford to lose. Work-life balance is a contested term that researchers began to study in the 1960s. It refers to how people allocate time between their jobs and other pursuits, such as family, hobbies, and community involvement, and it encompasses employees' mental health and fitness, so that the organization's goals of retaining employees and earning profits can be achieved. Every organization is primarily concerned with establishing parity between employees' work and their personal lives so that they can contribute their maximum. Aims and Objectives: The aim of the present study is to examine the impact of work-life balance and employee satisfaction on organizational performance by evaluating the inter-related factors, in order to maintain the healthy growth of the organizations concerned. Materials and Methods: To realize the aim of the study, an unstructured questionnaire and face-to-face interviews were administered to 100 persons, the majority of them male, holding top- and middle-level positions in various organizations. The prime source of data was primary; however, the study also drew on the theoretical contributions made in this field by various researchers. Results: The majority of the respondents were males (80%) in the 25-45 age group. The collected data were analyzed through hypothesis-testing statistical techniques such as correlation analysis, simple regression analysis, and ANOVA, which rejected the null hypothesis that there is no relation between the work-life interface and organizational performance. The major finding of this study is that work-life balance is directly related to organizational performance: organizations that work on employee satisfaction earn more. In addition, there is a reduction in turnover rates and absenteeism and an enhancement of productivity and corporate revenue. Conclusion: The present study shows that an imbalance between work and personal life invites many mental or physical disorders, which leads to a dearth in performance. As a result, not only employees but also organizations suffer, as is clearly shown in the face-to-face interviews conducted with employees. The study does not target a particular class of audience; rather, it offers benefits to the masses.
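As a rough illustration of the hypothesis-testing workflow described above (correlation, simple regression, and ANOVA), the sketch below runs the same family of tests on entirely synthetic survey scores; the variable names and data are assumptions, not the study's dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical survey scores (not the study's data): a work-life balance
# rating and an organizational-performance rating for 100 respondents.
wlb = rng.normal(3.5, 0.8, 100)
perf = 0.6 * wlb + rng.normal(0, 0.5, 100)

r, p_corr = stats.pearsonr(wlb, perf)                              # correlation analysis
slope, intercept, r_val, p_reg, se = stats.linregress(wlb, perf)   # simple regression

# One-way ANOVA across three hypothetical respondent groups
groups = np.array_split(perf, 3)
f_stat, p_anova = stats.f_oneway(*groups)

print(r, p_corr, slope, p_reg, f_stat, p_anova)
```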

Keywords: work-life balance, performance, culture, organization, satisfaction

Procedia PDF Downloads 106
722 Investigation of Two Polymorphisms of the hTERT Gene (rs2736098 and rs2736100) and the miR-146a rs2910164 Polymorphism in Cervical Cancer

Authors: Hossein Rassi, Alaheh Gholami Roud-Majany, Zahra Razavi, Massoud Hoshmand

Abstract:

Cervical cancer is a multistep disease thought to result from an interaction between genetic background and environmental factors. Human papillomavirus (HPV) infection is the leading risk factor for cervical intraepithelial neoplasia (CIN) and cervical cancer. On the other hand, some hTERT and miRNA polymorphisms may play an important role in carcinogenesis. This study attempts to clarify the relation of hTERT genotypes and miR-146a genotypes to cervical cancer. Forty-two archival samples with cervical lesions were retrieved from Khatam hospital, and 40 samples from healthy persons were used as the control group. A simple and rapid method was used to detect the simultaneous amplification of the HPV consensus L1 region and HPV-16, -18, -11, -31, -33, and -35, along with the β-globin gene as an internal control. Multiplex PCR was used for the detection of hTERT and miR-146a rs2910164 genotypes in our laboratory. Finally, data analysis was performed using version 7 of the Epi Info™ (2012) software and the chi-square (χ²) test for trend. Cervical lesions were collected from 42 patients with squamous metaplasia, cervical intraepithelial neoplasia, and cervical carcinoma. Successful DNA extraction was assessed by PCR amplification of the β-actin gene (99 bp). According to the results, the hTERT (rs2736098) GG genotype and the miR-146a rs2910164 CC genotype were significantly associated with an increased risk of cervical cancer in the study population. In this study, we detected 13 HPV 18 positives among the 42 cervical cancer samples. Associations between several SNPs and human papillomavirus have been observed in only a few studies. The differences among studies' findings may result from different races and geographic situations, as well as differences in lifestyle in each region. The present study provided preliminary evidence that a p53 GG genotype and the miR-146a rs2910164 CC genotype may affect cervical cancer risk in the study population, interacting synergistically with the HPV 18 genotype. Our results demonstrate that testing hTERT rs2736098 genotypes and miR-146a rs2910164 genotypes in combination with HPV 18 can serve to identify major risk factors in the early identification of cervical cancers. Furthermore, the results indicate the possibility of primary prevention of cervical cancer by vaccination against HPV 18 in Iran.

Keywords: polymorphism of hTERT gene, miR-146a rs2910164 polymorphism, cervical cancer, virus

Procedia PDF Downloads 302
721 Piql Preservation Services - A Holistic Approach to Digital Long-Term Preservation

Authors: Alexander Rych

Abstract:

Piql Preservation Services (“Piql”) is a turnkey solution designed for secure, migration-free long-term preservation of digital data. Piql sets an open standard for long-term preservation for the future. It consists of the equipment and processes needed for writing and retrieving digital data. Exponentially growing amounts of data demand logistically effective and cost-effective processes. Digital storage media (hard disks, magnetic tape) exhibit limited lifetimes. Repetitive data migration to overcome the rapid obsolescence of hardware and software carries an increased risk of data loss, data corruption, or even manipulation, and adds significant recurring costs for hardware and software investments. Piql stores any kind of data, in both digital and analog form, securely for 500 years. The medium that provides this is a film reel. Using photosensitive film on a polyester base, a very stable material known for its immutability over hundreds of years, secure and cost-effective long-term preservation can be provided. The film reel itself is stored in packaging capable of protecting the optical storage medium. These components have undergone extensive testing to ensure longevity of up to 500 years. In addition to its durability, film is a true WORM (write once, read many) medium and is therefore resistant to editing or manipulation. Being able to store any form of data onto the film makes Piql a superior solution for long-term preservation. Paper documents, images, video, or audio sequences – all of these file formats and documents can be preserved in their native file structure. In order to restore the encoded digital data, only a film scanner, a digital camera, or any appropriate optical reading device will be needed in the future. Every film reel includes an index section describing the data saved on the film. It also contains a content section carrying metadata, enabling users in the future to rebuild software in order to read and decode the digital information.

Keywords: digital data, long-term preservation, migration-free, photosensitive film

Procedia PDF Downloads 379
720 Accelerator Mass Spectrometry Analysis of Isotopes of Plutonium in PM₂.₅

Authors: C. G. Mendez-Garcia, E. T. Romero-Guzman, H. Hernandez-Mendoza, C. Solis, E. Chavez-Lomeli, E. Chamizo, R. Garcia-Tenorio

Abstract:

Plutonium is present in different concentrations in the environment and in biological samples as a result of nuclear weapons testing, nuclear waste recycling, and accidental discharges from nuclear plants. This radioisotope is considered among the most radiotoxic substances, particularly when it enters the human body through inhalation of insoluble powders or aerosols. This is the main reason for determining the concentration of this radioisotope in the atmosphere. Besides that, the ²⁴⁰Pu/²³⁹Pu isotopic ratio provides information about the origin of the source. PM₂.₅ sampling was carried out in the Metropolitan Zone of the Valley of Mexico (MZVM) from February 18th to March 17th, 2015, on quartz filters. There have been significant recent developments in sample preparation and accurate measurement methods to detect the ultra-trace levels at which plutonium is found in the environment. Accelerator mass spectrometry (AMS) is a technique that allows detection limits of around a femtogram (10⁻¹⁵ g). The AMS determinations include the chemical isolation of Pu: the separation involved acidic digestion and radiochemical purification using an anion exchange resin, and finally the source is prepared by pressing Pu into the corresponding cathodes. According to the authors' previous work, these aerosols showed deviations of the ²³⁵U/²³⁸U ratio from its natural value, suggesting an anthropogenic source altering it. The determination of the concentrations of the Pu isotopes can be a useful tool to clarify this presence in the atmosphere. The first results showed a mean ²³⁹Pu activity concentration of 280 nBq m⁻³, and the ²⁴⁰Pu/²³⁹Pu ratio was 0.025, corresponding to a weapons-production source; these results corroborate that there is an anthropogenic influence increasing the concentration of radioactive material in PM₂.₅. To the authors' knowledge, activity concentrations of ²³⁹⁺²⁴⁰Pu reported for total suspended particles (TSP) are around a few tens of nBq m⁻³, with ²⁴⁰Pu/²³⁹Pu ratios of about 0.17. The preliminary results in the MZVM show higher activity concentrations of Pu isotopes (40 and 700 nBq m⁻³) and a lower ²⁴⁰Pu/²³⁹Pu ratio than previously reported. These values are of the order of those associated with high-purity weapons-grade plutonium.
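Since AMS counts atoms while the results above are quoted as activity concentrations, the two are linked only through the decay constant λ = ln 2 / t½. The back-of-the-envelope sketch below applies that conversion to the quoted 280 nBq m⁻³ value, assuming a ²³⁹Pu half-life of about 24,110 years; it is illustrative arithmetic, not part of the study's methodology.

```python
import math

T_HALF_239PU_S = 24_110 * 365.25 * 24 * 3600   # ~239Pu half-life in seconds (assumed)
AVOGADRO = 6.022e23
M_239 = 239.05                                  # g/mol

def atoms_per_m3_from_activity(a_bq_per_m3, t_half_s):
    """Number concentration N = A / lambda, with lambda = ln2 / t_half."""
    lam = math.log(2) / t_half_s
    return a_bq_per_m3 / lam

a = 280e-9                                      # 280 nBq/m^3 reported for 239Pu
n = atoms_per_m3_from_activity(a, T_HALF_239PU_S)
mass_fg = n * M_239 / AVOGADRO * 1e15           # femtograms of 239Pu per m^3
print(f"{n:.2e} atoms/m^3  ~= {mass_fg:.2f} fg/m^3")
```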

Keywords: aerosols, fallout, mass spectrometry, radiochemistry, tracer, ²⁴⁰Pu/²³⁹Pu ratio

Procedia PDF Downloads 154
719 Emergency Physician Performance for Hydronephrosis Diagnosis and Grading Compared with Radiologist Assessment in Renal Colic: The EPHyDRA Study

Authors: Sameer A. Pathan, Biswadev Mitra, Salman Mirza, Umais Momin, Zahoor Ahmed, Lubna G. Andraous, Dharmesh Shukla, Mohammed Y. Shariff, Magid M. Makki, Tinsy T. George, Saad S. Khan, Stephen H. Thomas, Peter A. Cameron

Abstract:

Study objective: Emergency physicians' (EP) ability to identify hydronephrosis on point-of-care ultrasound (POCUS) has been assessed in the past using CT scan as the reference standard. We aimed to assess EP interpretation of POCUS to identify and grade hydronephrosis in a direct comparison with the consensus interpretation of POCUS by radiologists, and also to compare EP and radiologist performance using CT scan as the criterion standard. Methods: Using data from a POCUS databank, a prospective interpretation study was conducted at an urban academic emergency department. All POCUS exams were performed on patients presenting with renal colic to the ED. Institutional approval was obtained for conducting this study. All analyses were performed using Stata MP 14.0 (Stata Corp, College Station, Texas). Results: A total of 651 patients were included, with paired sets of renal POCUS video clips and CT scans performed at the same ED visit. Hydronephrosis was reported in 69.6% of POCUS exams by radiologists and 72.7% of CT scans (p=0.22). The κ for consensus interpretation of POCUS between the radiologists to detect hydronephrosis was 0.77 (0.72 to 0.82), and the weighted κ for grading hydronephrosis was 0.82 (0.72 to 0.90), interpreted as good to very good. Using CT scan findings as the criterion standard, EPs had an overall sensitivity of 81.1% (95% CI: 79.6% to 82.5%), specificity of 59.4% (95% CI: 56.4% to 62.5%), PPV of 84.3% (95% CI: 82.9% to 85.7%), and NPV of 53.8% (95% CI: 50.8% to 56.7%); compared to radiologist sensitivity of 85.0% (95% CI: 82.5% to 87.2%), specificity of 79.7% (95% CI: 75.1% to 83.7%), PPV of 91.8% (95% CI: 89.8% to 93.5%), and NPV of 66.5% (95% CI: 61.8% to 71.0%). Testing for a report of moderate or high-grade hydronephrosis, EP specificity was 94.6% (95% CI: 93.7% to 95.4%), rising to 99.2% (95% CI: 98.9% to 99.5%) for identifying severe hydronephrosis alone. Conclusion: EP POCUS interpretations were comparable to those of radiologists for identifying moderate to severe hydronephrosis using CT scan results as the criterion standard. Among patients with a moderate or high pre-test probability of ureteric calculi, as calculated by the STONE score, the presence of moderate to severe (+LR 6.3 and −LR 0.69) or severe hydronephrosis (+LR 54.4 and −LR 0.57) was highly diagnostic of stone disease. Low-dose CT is indicated in such patients for evaluation of stone size and location.
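The sensitivity, specificity, predictive values, and likelihood ratios reported above all derive from a 2×2 table against the CT reference; a minimal sketch of that arithmetic is shown below with hypothetical cell counts, not the EPHyDRA data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy measures against a reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv  = tp / (tp + fp)
    npv  = tn / (tn + fn)
    lr_pos = sens / (1 - spec)      # +LR
    lr_neg = (1 - sens) / spec      # -LR
    return sens, spec, ppv, npv, lr_pos, lr_neg

# Hypothetical counts chosen only to illustrate the calculation.
print(diagnostic_metrics(tp=384, fp=71, fn=89, tn=107))
```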

Keywords: renal colic, point-of-care, ultrasound, bedside, emergency physician

Procedia PDF Downloads 264
718 Thin Film Thermoelectric Generator with Flexible Phase Change Material-Based Heatsink

Authors: Wu Peiqin

Abstract:

Flexible thermoelectric devices are light and flexible and can be placed in close contact with heat source surfaces of any shape to minimize heat loss and achieve efficient energy conversion. Among the wide range of application fields, energy harvesting via flexible thermoelectric generators can adapt to a variety of curved heat sources (such as the human body, circular tubes, and surfaces of different shapes) and can drive low-power electronic devices, making it one of the most promising technologies for self-powered systems. The heat flux across the cross-section of a flexible thin-film generator is limited by the film thickness, so the temperature difference decreases during generation and the output power is low. At present, most thin-film thermoelectric generators direct the heat flow along the film plane; however, this configuration is not suitable for attachment to the human body surface for power generation. To make the film generator more suitable for thermoelectric generation, it is necessary to apply a flexible heatsink to the air side of the film to maintain the temperature difference. In this paper, bismuth telluride thermoelectric paste was deposited on a polyimide flexible substrate by screen printing, and the flexible thermoelectric film was formed after drying. There are ten pairs of thermoelectric legs. Each thermoelectric leg measures 20 x 2 x 0.1 mm, and adjacent legs are spaced 2 mm apart. A phase change material-based flexible heatsink was designed and fabricated. The flexible heatsink consists of n-octadecane, polystyrene, and expanded graphite: n-octadecane serves as the thermal storage material, polystyrene as the supporting material, and expanded graphite as the thermally conductive additive. The thickness of the flexible phase change material-based heatsink is 2 mm. A thermoelectric performance testing platform was built, and the output performance was tested. The results show that the system can generate an open-circuit voltage of 3.89 mV at a temperature difference of 10 K, which is higher than that of the generator without a heatsink. Therefore, the flexible heatsink can increase the temperature difference between the two ends of the film and improve the output performance of the flexible film generator. This result promotes the application of film thermoelectric generators in harvesting human body heat for power generation.
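Assuming the usual series relation V_oc = N·S·ΔT for a thermocouple array, the reported figures imply an effective Seebeck coefficient of roughly 39 μV/K per printed couple; the sketch below shows this back-of-the-envelope check (the inferred coefficient is an estimate, not a value reported by the authors).

```python
# Back-of-the-envelope check of the reported open-circuit voltage, assuming
# the usual relation V_oc = N * S_couple * dT for N thermocouples in series.
v_oc = 3.89e-3      # V, reported open-circuit voltage
n_pairs = 10        # thermoelectric couples in the printed film
dT = 10.0           # K, reported temperature difference

s_couple = v_oc / (n_pairs * dT)
print(f"Effective Seebeck coefficient per couple ~ {s_couple * 1e6:.0f} uV/K")
```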

Keywords: flexible thermoelectric generator, screen printing, PCM, flexible heatsink

Procedia PDF Downloads 85
717 An Effective Preventive Program of HIV/AIDS among Hill Tribe Youth, Thailand

Authors: Tawatchai Apidechkul

Abstract:

This operational research was conducted in two phases: the first phase, which aimed to determine risk behaviors, used a cross-sectional study design, followed by a community participatory research design to develop the HIV/AIDS preventive model among Akha youths. The instruments consisted of questionnaires and assessment forms that were tested for validity and reliability before use. The study settings were the Jor Pa Ka and Saen Suk Akha villages, Mae Chan District, Chiang Rai, Thailand. The study sample comprised the Akha youths living in these villages. Means and the chi-square test were used for statistical testing. Results: Akha youths in these population-mobilization villages live in agricultural families with low incomes and in circumstances where narcotic drugs are present. The average age was 16 (50.00%), 51.52% were Christian, 48.80% had completed secondary school, and 43.94% had an annual family income of 30,000-40,000 baht. Among males, 54.54% drank, 39.39% smoked, 7.57% used amphetamine, first sexual intercourse was reported at 14 years old, 50.00% had 2-5 partners, and 62.50% had unprotected (no-condom) sex. Reasons for unprotected sex included not being able to find a condom, unawareness of the need to use condoms, and dislike. 28.79% had never received STI-related information, and 6.06% had an STI. Among females, 15.15% drank, 28.79% had had sexual intercourse, with first sexual intercourse at less than 15 years old; 40.00% had unprotected (no-condom) sex, 10.61% had never received STI-related information, and 4.54% had an STI. The HIV/AIDS preventive model contained two components. Peer groups among the youths were built around interests in sports, and improving knowledge would empower their capability and lead to choices resulting in HIV/AIDS prevention. The empowerment model consisted of 4 courses: a. the human reproductive system and its hygiene; b. risk-avoidance skills, family planning, and counseling techniques; c. HIV/AIDS and other STIs; d. drugs and related laws and regulations. The results of the activities showed that youths had greater knowledge and attitude levels for HIV/AIDS prevention, with statistical significance (χ²-test = 12.87, p-value = 0.032 and χ²-test = 9.31, p-value < 0.001, respectively). A continuous and innovative youth capability development program is an appropriate process to reduce the spread of HIV/AIDS among youths, particularly in populations with a distinct language and culture.

Keywords: HIV/AIDS, preventive program, effective, hill tribe

Procedia PDF Downloads 353
716 Validation of Two Field-Based Dynamic Balance Tests in the Activation of Selected Hip and Knee Stabilizer Muscles

Authors: Mariam A. Abu-Alim

Abstract:

The purpose of this study was to validate the muscle activation amplitudes of two field-based dynamic balance tests, which are also used as strengthening and motor-control exercises, in the activation of selected hip and knee stabilizer muscles. Methods: Eighteen college-age female students (21±2 years; 65.6±8.7 kg; 169.7±8.1 cm) who participated in physical activity for at least 30 minutes on most days of the week volunteered. The wireless BIOPAC (MP150, BIOPAC Systems Inc., California, USA) surface electromyography system was used to record the activation of the Gluteus Medius and the Adductor Magnus among the hip stabilizer muscles, and the Hamstrings, Quadriceps, and Gastrocnemius among the knee stabilizer muscles. Surface electrodes (EL 503, BIOPAC Systems Inc.) connected to dual wireless EMG BioNomadix transmitters were placed on the selected muscles of each participant's dominant side. Manual muscle testing was performed to obtain the maximal voluntary isometric contraction (MVIC), to which all muscle activity data collected during the three reaching directions (anterior, posteromedial, posterolateral) of the Star Excursion Balance Test (SEBT) and the Y-Balance Test (YBT) could be normalized. All participants performed three trials for each reaching direction of the SEBT and the YBT. The dominant-leg trial for each participant, which was also the standing leg, was selected for analysis. Results: The selected hip stabilizer muscles (Gluteus Medius, Adductor Magnus) both showed activation greater than 100% MVIC during the SEBT in all three directions, whereas the selected knee stabilizer muscles showed activation greater than 100% MVIC and were significantly more activated during the YBT in all three reaching directions. The results also showed that the posterolateral and posteromedial reaching directions for both dynamic balance tests produced greater activation levels, above 200% MVIC for all tested muscles except the hamstrings. Conclusion: The results of this study showed that the SEBT and the YBT elicit high levels of muscular activity in the hip and knee stabilizer muscles, which can be used to track improvement, strength, and control and to reduce injury rates, since injuries involving these hip and knee stabilizer muscles represent up to 35% of all athletic injuries, depending on the type of sport.
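Normalization to %MVIC, as used above, is simply the ratio of the task EMG amplitude to the MVIC reference amplitude; a minimal sketch is given below using synthetic EMG segments and an RMS amplitude measure, which is one common choice and may differ from the study's processing.

```python
import numpy as np

def rms(signal):
    """Root-mean-square amplitude of an EMG segment."""
    return np.sqrt(np.mean(np.square(signal)))

def percent_mvic(emg_trial_rms, emg_mvic_rms):
    """Normalize task EMG amplitude to the MVIC reference, expressed as %MVIC."""
    return 100.0 * emg_trial_rms / emg_mvic_rms

# Hypothetical EMG segments (arbitrary units), for illustration only
mvic_segment = np.abs(np.random.default_rng(1).normal(0, 1.0, 2000))
task_segment = np.abs(np.random.default_rng(2).normal(0, 1.3, 2000))

print(f"{percent_mvic(rms(task_segment), rms(mvic_segment)):.0f} %MVIC")
```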

Keywords: dynamic balance tests, electromyography, hip stabilizer muscles, knee stabilizer muscles

Procedia PDF Downloads 135
715 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering

Authors: Hamza Benzerrouk, Alexander Nebylov

Abstract:

In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of coping with GNSS outliers and outages. The tightly coupled GNSS/INS navigation filter mixes the GNSS pseudoranges and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF, or HCKF). The EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches known as Cubature and High-Degree Cubature Kalman Filtering. On the basis of previous results on state estimation based on INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High-Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature-rule-based Kalman Filter (GCKF). High-degree cubature rules are the kernel of the new solution, giving more accurate estimation with less computational complexity than Gauss-Hermite quadrature; the Gauss-Hermite Kalman Filter (GHKF) is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system, a dynamics EKF is used with transition matrix factorization together with GNSS block processing, which is described in detail in the paper; the intermediate frequency (IF) is assumed available, using correlator samples at a rate of 500 Hz in the presented approach. GNSS (GPS+GLONASS) measurements are assumed available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new versions of the CKF, called high-order CKFs, based on spherical-radial cubature rules developed at the fifth order in this work. The estimation accuracy of the high-degree CKF is expected to be comparable to that of the GHKF; the state estimation results are observed and discussed for different initialization parameters. The results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach is applied based on the High-Degree Cubature Kalman Filter.
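For reference, the third-degree spherical-radial cubature rule underlying the CKF uses 2n equally weighted points; the sketch below shows a generic CKF measurement update built on that rule. It is a minimal illustration, not the authors' fifth-order HCKF/GCKF implementation or their GNSS/INS measurement model.

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature points: 2n points, weights 1/(2n)."""
    n = mean.size
    S = np.linalg.cholesky(cov)                              # cov = S @ S.T
    xi = np.sqrt(n) * np.hstack((np.eye(n), -np.eye(n)))     # (n, 2n) unit directions
    return mean[:, None] + S @ xi                            # (n, 2n)

def ckf_measurement_update(x, P, z, h, R):
    """One CKF measurement update for a nonlinear measurement model z = h(x) + v."""
    n = x.size
    pts = cubature_points(x, P)
    Z = np.column_stack([h(pts[:, i]) for i in range(2 * n)])  # propagated points
    z_pred = Z.mean(axis=1)
    dZ = Z - z_pred[:, None]
    dX = pts - x[:, None]
    Pzz = dZ @ dZ.T / (2 * n) + R            # innovation covariance
    Pxz = dX @ dZ.T / (2 * n)                # cross covariance
    K = Pxz @ np.linalg.inv(Pzz)             # cubature Kalman gain
    x_new = x + K @ (z - z_pred)
    P_new = P - K @ Pzz @ K.T
    return x_new, P_new
```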

Keywords: GNSS, INS, Kalman filtering, ultra tight integration

Procedia PDF Downloads 266
714 Estimation of Rock Strength from Diamond Drilling

Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi

Abstract:

The mining industry relies on an estimate of rock strength at several stages of a mine life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design that can yield significant dividends often requires a reliable estimate of the material rock strength. Common laboratory tests such as the rod mill, ball mill, and uniaxial compressive strength tests share common shortcomings in terms of time, sample preparation, bias in plug selection, cost, repeatability, and the sample amount needed to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of the rock strength from drilling data recorded while coring with a diamond core head. The work presented in this paper builds on a phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (polycrystalline diamond compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head that relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity) and introduces the intrinsic specific energy, or the energy required to drill a unit volume of rock with an ideally sharp drilling tool (meaning ideally sharp diamonds and no contact between the bit matrix and rock debris), which has been found to correlate well with the rock uniaxial compressive strength for PDC and roller cone bits. The second part describes the laboratory drill rig, the experimental procedure, which is tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology used to derive the intrinsic specific energy from the recorded data. The third section presents the results and shows that the intrinsic specific energy correlates well with the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate the rock strength from field drilling data, considering the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study of drilling data recorded while drilling an exploration well in Australia.
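The intrinsic specific energy discussed above is an energy per unit volume of rock drilled; as a simplified analogue, a Teale-style mechanical specific energy can be computed from thrust, torque, rotary speed, and rate of penetration. The sketch below shows that simpler calculation with illustrative numbers; it is not the Franca et al. bit-rock interface model used in the paper.

```python
import math

def mechanical_specific_energy(thrust_N, torque_Nm, rpm, rop_m_per_h, bit_area_m2):
    """Teale-style mechanical specific energy (Pa = J/m^3): axial plus rotary
    work per unit volume of rock drilled. A simplified analogue of the
    intrinsic specific energy, not the Franca et al. model."""
    rop = rop_m_per_h / 3600.0                  # m/s
    omega = 2.0 * math.pi * rpm / 60.0          # rad/s
    return thrust_N / bit_area_m2 + (torque_Nm * omega) / (bit_area_m2 * rop)

# Illustrative inputs for a small diamond core head (hypothetical values)
bit_area = 1.0e-3                               # m^2, annular kerf area
mse = mechanical_specific_energy(3000, 15, 400, 10, bit_area)
print(f"MSE ~ {mse / 1e6:.0f} MPa")             # comparable in magnitude to rock UCS
```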

Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength

Procedia PDF Downloads 120
713 A Risk-Based Approach to Construction Management

Authors: Chloe E. Edwards, Yasaman Shahtaheri

Abstract:

Risk management plays a fundamental role in project planning and delivery. The purpose of incorporating risk management into project management practices is to identify and address uncertainties related to key project-related activities. These uncertainties, known as risk events, can relate to project deliverables that are quantifiable and are often measured by their impact on project schedule, cost, or the environment. Risk management should be incorporated as an iterative practice throughout the planning, execution, and commissioning phases of a project. This paper specifically examines how risk management contributes to effective project planning and delivery through a case study of a transportation project. This case study focused solely on impacts to the project schedule regarding three milestones: readiness for delivery, readiness for testing and commissioning, and completion of the facility. The case study followed ISO 31000: Risk Management – Guidelines. The key factors outlined by these guidelines include understanding the scope and context of the project; conducting a risk assessment including identification, analysis, and evaluation; and lastly, risk treatment through mitigation measures. This process requires continuous consultation with subject matter experts and monitoring to iteratively update the risks accordingly. The risk identification process led to a total of fourteen risks related to design, permitting, construction, and commissioning. The analysis involved running 1,000 Monte Carlo simulations through @RISK 8.0 Industrial software to determine potential milestone completion dates based on the project baseline schedule. These dates include the best case, most likely case, and worst case to provide an estimated delay for each milestone. Evaluation of these results provided insight into which risks were the highest contributors to the projected milestone completion dates. Based on the analysis results, the risk management team was able to provide recommendations for mitigation measures to reduce the likelihood of the risks occurring. The risk management team also provided recommendations for managing the identified risks and project activities moving forward to meet the most likely or best-case milestone completion dates.
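A schedule-risk Monte Carlo of the kind run in @RISK can be sketched generically as sampling each risk's occurrence and a triangular delay, then summing; the example below uses entirely hypothetical risks and probabilities, not the fourteen risks of the case study.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000  # number of simulations, matching the count quoted above

# Hypothetical risk events feeding one milestone: occurrence probability and
# (min, most likely, max) delay in days. Values are illustrative only.
risks = [
    {"p": 0.30, "delay": (5, 15, 40)},    # e.g. permitting delay
    {"p": 0.20, "delay": (10, 20, 60)},   # e.g. construction rework
    {"p": 0.10, "delay": (2, 5, 15)},     # e.g. commissioning punch-list items
]

total_delay = np.zeros(N)
for r in risks:
    occurs = rng.random(N) < r["p"]                        # does the risk occur?
    total_delay += occurs * rng.triangular(*r["delay"], size=N)

best, likely, worst = np.percentile(total_delay, [10, 50, 90])
print(f"P10 {best:.0f} d, P50 {likely:.0f} d, P90 {worst:.0f} d delay to milestone")
```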

Keywords: construction management, monte carlo simulation, project delivery, risk assessment, transportation engineering

Procedia PDF Downloads 94
712 Family Planning and HIV Integration: A One-Stop Shop Model at Spilhaus Clinic, Harare, Zimbabwe

Authors: Mercy Marimirofa, Farai Machinga, Alfred Zvoushe, Tsitsidzaishe Musvosvi

Abstract:

The Government of Zimbabwe embarked on integrating family planning (FP) with sexually transmitted infection (STI) and human immunodeficiency virus (HIV) services in May 2020, with support from the World Health Organization (WHO). There were high HIV prevalence and incidence rates and a high STI burden among women attending FP clinics. Spilhaus is a specialized centre-of-excellence clinic offering a range of sexual reproductive health (SRH) services. HIV services were limited to testing only, and clients were referred to other facilities for further management. Integration of services requires that all services be available at one point so that clients can access them during a single visit to the facility. Objectives: The study was conducted to assess the impact the one-stop-shop model has made on access to integrated family planning and sexual reproductive health services compared to the supermarket approach. It also assessed the relationship between family planning services and other sexual reproductive health services. Methods: A secondary data analysis was conducted at Spilhaus clinic in Harare using family planning registers and HIV services registers, comparing the years 2019 and 2021. A two-sample t-test was used to determine the difference in clients accessing services under the two models. Spearman's rank correlation was used to determine whether accessing family planning services was related to accessing other sexual reproductive health services. Results: In 2019, 7,548 clients visited the Spilhaus clinic, compared to 8,265 during the period January to December 2021. The median age of clients accessing services was 32 years. An increase of 69% in the number of services accessed was recorded from 2019 to 2021. There was no difference in the number of clients accessing family planning, cervical cancer, and HIV services, but a difference was found in the number of clients offered STI screening services. There was also a relationship between accessing family planning services and STI screening services (ρ = 0.729, p-value = 0.006). Conclusion: Programming towards SRH services was a great achievement; the use of an integrated approach proved to be cost-effective as it minimised the resources required for separate programs. Clients accessed important health services at once, and the integration of these services provided an opportunity to offer comprehensive information addressing an individual's sexual reproductive health needs.
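The two-sample t-test and Spearman's rank correlation mentioned above can be reproduced generically as sketched below; the monthly client counts are synthetic placeholders derived loosely from the annual totals quoted, not the actual register data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical monthly client counts under the two models (illustrative only)
supermarket_2019 = rng.poisson(629, 12)   # ~7,548 clients over 12 months
one_stop_2021 = rng.poisson(689, 12)      # ~8,265 clients over 12 months
t_stat, p_t = stats.ttest_ind(supermarket_2019, one_stop_2021)   # two-sample t-test

# Spearman rank correlation between FP visits and STI screenings (hypothetical)
fp_visits = rng.poisson(300, 12)
sti_screens = fp_visits * 0.4 + rng.normal(0, 10, 12)
rho, p_rho = stats.spearmanr(fp_visits, sti_screens)

print(t_stat, p_t, rho, p_rho)
```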

Keywords: intergration, one stop shop, family planning, reproductive health

Procedia PDF Downloads 48
711 Mechanical and Material Characterization of High Nitrogen Supersaturated Tool Steels for Die Technology

Authors: Tatsuhiko Aizawa, Hiroshi Morita

Abstract:

Tool steels such as SKD11 and SKH51 have been utilized as punch and die substrates for cold stamping, forging, and fine blanking processes. Heat-treated SKD11 punches with a hardness of 700 HV work well in the stamping of SPCC, normal steel plates, and non-ferrous alloys such as brass sheet. However, they suffer severe damage in the fine blanking of holes smaller than 1.5 mm in diameter. Under a high aspect ratio of punch length to diameter, elastoplastic buckling of slender punches occurred on the production line, and the heat-treated punches were at risk of chipping at their edges. To be free from such damage, the blanking punch must have sufficient rigidity and strength at the same time. In the present paper, a small-hole blanking punch with a dual-toughness structure is proposed to provide a solution to this engineering issue in production. A low-temperature plasma nitriding process was utilized to form a thick nitrogen-supersaturated layer in the original SKD11 punch. Through plasma nitriding at 673 K for 14.4 ks, a nitrogen-supersaturated layer 50 μm thick and free of nitride precipitates was formed as a high-nitrogen steel (HNS) layer surrounding the original SKD11 punch. In this two-zone structured SKD11 punch, the surface hardness increased from 700 HV for the heat-treated SKD11 to 1400 HV. This outer high-nitrogen SKD11 (HN-SKD11) layer had a homogeneous nitrogen solute depth profile, with a nitrogen solute content plateau of 4 mass% extending to the border between the outer HN-SKD11 layer and the original SKD11 matrix. When stamping 1 mm thick brass sheet with this dually toughened SKD11 punch, the punch life was extended from 500 K shots to 10,000 K shots, providing a much more stable production line for the brass American snaps. Furthermore, with the aid of a masking technique, the 50 μm thick punch side-surface layer was modified by the same high-nitrogen supersaturation process to have a stripe structure in which un-nitrided SKD11 and HN-SKD11 layers were alternately aligned from the punch head to the punch bottom. This flexible structuring promoted the mechanical integrity of total rigidity and toughness for a punch with an extremely small diameter.

Keywords: high nitrogen supersaturation, semi-dry cold stamping, solid solution hardening, tool steel dies, low temperature nitriding, dual toughness structure, extremely small diameter punch

Procedia PDF Downloads 78
710 The Role of Evaluation for Effective and Efficient Change in Higher Education Institutions

Authors: Pattaka Sa-Ngimnet

Abstract:

That the university as we have known it is no longer serving the needs of the vast majority of students and potential students has been a topic of much discussion. Institutions of higher education, in this age of global culture, are in a process of metamorphosis. Technology is being used to allow more students, including older students, working students, and disabled students who cannot attend conventional classes, to have greater access to higher education through the internet. But change must come about only after much evaluation and experimentation, or education will simply become a commodity as, in some cases, it already has. This paper is concerned with the meaning and methods of change and evaluation as they are applied to institutions of higher education. Organizations generally have different goals and different approaches to being successful. However, the means of reaching those goals require rational and effective planning. Any plan for successful change in any institution must take into account both effectiveness and efficiency and the differences between them. 'Effectiveness' refers to an adequate means of achieving an objective. 'Efficiency' refers to the ability to achieve an objective without waste of time or resources (The Free Dictionary). So an effective means may not be efficient, and an efficient means may not be effective. The goal is to reach a synthesis of effectiveness and efficiency that will maximize both to the extent that each is limited by the other. The focus of this paper, then, is to determine how an educational institution can become either successful or oppressive depending on the kinds of planning, evaluating, and changes that operate by and on the administration. If the plan is concerned only with efficiency, the institution can easily become oppressive and lose sight of its purpose of educating students. If it is overly concentrated on effectiveness, the students may receive a superior education in the short run, but the institution will face operating difficulties. In becoming only goal-oriented, institutions also face problems. Simply stated, if an institution reaches its goals, the stakeholders may become satisfied and fail to change and keep up with the needs of the times. So goals should be seen only as benchmarks in a process of becoming ever better at providing quality education. Constant and consistent evaluation is the key to making all these factors come together in a successful process of planning, testing, and changing the plans as needed. The focus of the evaluation has to be considered: evaluations must take into account the progress and needs of students, the methods and skills of instructors, the resources available from the institution, and the styles and objectives of administrators. Thus the role of evaluation is pivotal in providing for the maximum of both effective and efficient change in higher education institutions.

Keywords: change, effectiveness, efficiency, education

Procedia PDF Downloads 305
709 Impact of Autoclave Sterilization of Gelatin on Endotoxin Level and Physical Properties Compared to Surfactant Purified Gelatins

Authors: Jos Olijve

Abstract:

Introduction and Purpose: Endotoxins are found in the outer membrane of gram-negative bacteria and provoke profound in vitro and in vivo responses. They can trigger strong immune responses and negatively affect various cellular activities, particularly in cells expressing toll-like receptors. They are therefore unwanted contaminants of biomaterials sourced from natural raw materials, and their activity must be as low as possible. Collagen and gelatin are natural extracellular matrix components and, due to their low allergenic potential, suitable biological properties, and tunable physical characteristics, have high potential in biomedical applications. The purpose of this study was to determine the influence of autoclave sterilization of gelatin on physical properties and endotoxin level compared to surfactant-purified gelatin. Methods: Type A gelatin from Sigma-Aldrich (G1890) with an endotoxin level of 35,000 endotoxin units (EU) per gram of gelatin and type A gelatins from Rousselot Gent with an endotoxin activity of 30,000 EU per gram were used. A 10 w/w% G1890 gelatin solution was autoclave sterilized for 30 minutes at 121°C and 1 bar overpressure. The physical properties and endotoxin level of the sterilized G1890 gelatin were compared to those of a type A gelatin from Rousselot purified with Triton X100 surfactant. The Triton X100 was added to a concentration of 0.5 w/w%, which is above the critical micelle concentration. The gelatin-surfactant mixtures were kept for 30-45 minutes under constant stirring at 55-60°C. The Triton X100 was removed by active carbon filtration. The endotoxin levels of the gelatins were measured using the Endozyme recombinant factor C method from Hyglos GmbH (Germany). Results and Discussion: Autoclave sterilization significantly affected the physical properties of the gelatin. The molecular weight of G1890 decreased from 140 to 50 kDa, and the gel strength decreased from 300 to 40 g. The endotoxin level of the gelatin was reduced after sterilization from 35,000 EU/g to levels of 400-500 EU/g. These endotoxin levels are, however, still far above the upper endotoxin limit of 0.05 EU/ml, which corresponds to 5 EU/g gelatin based on a 1% gelatin solution, needed to avoid alteration of cell proliferation. The molecular weight and gel strength of the Rousselot gelatin were not altered after Triton X100 purification and remained 150 kDa and 300 g, respectively. The endotoxin level of the Triton X100-purified Rousselot gelatin was < 5 EU/g gelatin. Conclusion: Autoclave sterilization of gelatin, in comparison to Triton X100 purification, is not efficient at inactivating endotoxins in gelatin to levels below the upper limit for avoiding alteration of cell proliferation. Autoclave sterilization caused a significant decrease in molecular weight and gel strength, which makes autoclave-sterilized gelatin, in comparison to Triton X100-purified gelatin, unsuitable for 3D printing.

Keywords: endotoxin, gelatin, molecular weight, sterilization, Triton X100

Procedia PDF Downloads 215
708 Estimation of Biomedical Waste Generated in a Tertiary Care Hospital in New Delhi

Authors: Priyanka Sharma, Manoj Jais, Poonam Gupta, Suraiya K. Ansari, Ravinder Kaur

Abstract:

Introduction: As necessary as health care is for the population, so is the management of the biomedical waste produced. Biomedical waste is a broad term for the waste material produced during the diagnosis, treatment, or immunization of human beings and animals, in research, or in the production or testing of biological products. Biomedical waste management is a chain of processes from the point of generation of biomedical waste to its final disposal in the correct and proper way assigned to that particular type of waste. Any deviation from these processes leads to improper disposal of biomedical waste, which is itself a major health hazard. Proper segregation of biomedical waste is the key to biomedical waste management. Improper disposal of BMW can cause sharps injuries, which may lead to HIV, hepatitis B virus, and hepatitis C virus infections. Therefore, proper disposal of BMW is of utmost importance. Health care establishments segregate biomedical waste and dispose of it as per the biomedical waste management rules in India. Objectives: This study was done to observe the current trends in biomedical waste generated in a tertiary care hospital in Delhi. Methodology: Biomedical waste management rounds were conducted in the hospital wards. Relevant details were collected and analysed, and the sites with maximum biomedical waste generation were identified. All data were cross-checked with the common collection site. Results: The total amount of waste generated in the hospital from January 2014 to December 2014 was 639,547 kg, of which 70.5% was general (non-hazardous) waste and the remaining 29.5% was BMW, consisting of highly infectious waste (12.2%), disposable plastic waste (16.3%), and sharps (1%). The sites producing the largest quantities of biomedical waste were the Obstetrics and Gynaecology wards, with 45.8% of the total biomedical waste production, followed by the Paediatrics, Surgery, and Medicine wards with 21.2%, 4.6%, and 4.3%, respectively. The maximum average biomedical waste generated was in the Obstetrics and Gynaecology ward with 0.7 kg/bed/day, followed by the Paediatrics, Surgery, and Medicine wards with 0.29, 0.28, and 0.18 kg/bed/day, respectively. Conclusions: Hospitals should pay attention to the sites that produce large amounts of BMW to avoid improper segregation of biomedical waste. Also, induction and refresher training programs on biomedical waste management should be conducted to avoid improper management of biomedical waste. Healthcare workers should be made aware of the risks of poor biomedical waste management.

Keywords: biomedical waste, biomedical waste management, hospital-tertiary care, New Delhi

Procedia PDF Downloads 229
707 Extreme Heat and Workforce Health in Southern Nevada

Authors: Erick R. Bandala, Kebret Kebede, Nicole Johnson, Rebecca Murray, Destiny Green, John Mejia, Polioptro Martinez-Austria

Abstract:

Summer temperature data from Clark County were collected and used to estimate two different heat-related indexes: the heat index (HI) and the excess heat factor (EHF). These two indexes were used jointly with data on heat-related deaths in Clark County to assess the effect of extreme heat on the exposed population. The trends in the heat indexes were then analyzed for the 2007-2016 decade, and the correlation between heat wave episodes and the number of heat-related deaths in the area was estimated. The HI showed that this value has increased significantly in June, July, and August over the last ten years. The same trend was found for the EHF, which showed a clear increase in the severity and number of these events per year. The number of heat wave episodes increased from 1.4 per year during the 1980-2016 period to 1.66 per year during the 2007-2016 period. However, a different trend was found for heat-wave-event duration, which decreased from an average of 20.4 days during the trans-decadal period (1980-2016) to 18.1 days during the most recent decade (2007-2016). The number of heat-related deaths was also found to increase from 2007 to 2016, with 2016 having the highest number of heat-related deaths. Both the HI and the number of deaths showed a normal-like distribution for June, July, and August, with peak values reached in late July and early August. The average maximum HI values correlated better with the number of deaths registered in Clark County than the EHF did, probably because the HI uses the maximum temperature and humidity in its estimation, whereas the EHF uses the average mean temperature. However, it is worth testing the EHF for the study zone because it has been reported to perform well in the case of heat-related morbidity. For the overall period, 437 heat-related deaths were registered in Clark County, with 20% of the deaths occurring in June, 52% in July, 18% in August, and the remaining 10% in the other months of the year. The most vulnerable subpopulation was people over 50 years old, among whom 76% of the heat-related deaths were registered; most of these cases were associated with pre-existing heart disease. The second most vulnerable subpopulation was young adults (20-50), who accounted for 23% of the heat-related deaths; these deaths were associated with alcohol/illegal drug intoxication.
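One widely used formulation of the excess heat factor (following Nairn and Fawcett) combines a three-day mean temperature anomaly against the climatological 95th percentile with an acclimatization term; a sketch is given below, noting that the study's exact implementation may differ.

```python
import numpy as np

def excess_heat_factor(daily_mean_temp, t95):
    """Excess heat factor in a commonly used Nairn-and-Fawcett-style form
    (a sketch; the study's exact implementation may differ).

    daily_mean_temp : 1-D array of daily mean temperatures (deg C)
    t95             : long-term 95th percentile of daily mean temperature
    """
    T = np.asarray(daily_mean_temp, dtype=float)
    ehf = np.full(T.shape, np.nan)
    for i in range(30, len(T) - 2):
        three_day = T[i:i + 3].mean()
        ehi_sig = three_day - t95                   # significance index
        ehi_accl = three_day - T[i - 30:i].mean()   # acclimatization index
        ehf[i] = ehi_sig * max(1.0, ehi_accl)
    return ehf   # positive values indicate heatwave conditions
```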

Keywords: heat, health, hazards, workforce

Procedia PDF Downloads 94
706 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor

Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang

Abstract:

To avoid the cross-talk issue of a resistive random access memory (RRAM)-only cell, a one-transistor, one-resistor (1T1R) architecture with a TiO₂-based RRAM cell connected to a solution-processed barium zirconate nickelate (BZN) organic thin-film transistor (OTFT) is successfully demonstrated. The OTFT was fabricated on a glass substrate. Aluminum (Al) as the gate electrode was deposited via a radio-frequency (RF) magnetron sputtering system. The BZN solution was synthesized from barium acetate, zirconium n-propoxide, and nickel(II) acetylacetonate using the sol-gel method. After the BZN solution was completely prepared by the sol-gel process, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air conditions. The pentacene thin film was thermally evaporated onto the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, gold (Au) electrodes were deposited using an RF magnetron sputtering system and defined through shadow masks as both the source and drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. For the 1T1R configuration, the RRAM device was fabricated directly on the drain electrode of the TFT. A simple metal/insulator/metal structure consisting of Al/TiO₂/Au was fabricated: first, Au was deposited as the bottom electrode of the RRAM device by the RF magnetron sputtering system; then, the TiO₂ layer was deposited on the Au electrode by sputtering; finally, Al was deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with a low threshold voltage of −1.1 V, a good saturation mobility of 5 cm²/V·s, and a low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was finally completed to form the 1T1R configuration, with a low power consumption of 1.3 μW, a low operating current of 0.5 μA, and reliable data retention. Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current, with different distributions of internal oxygen vacancies in the RRAM and 1T1R devices. This phenomenon can be well explained by the proposed mechanism model. The results make the 1T1R promising for practical applications in low-power active-matrix flat-panel displays.
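The saturation mobility quoted above is conventionally extracted from a linear fit of √I_D versus V_G in the saturation regime; the sketch below illustrates that extraction with the channel geometry quoted in the abstract and a hypothetical gate capacitance and synthetic transfer curve, not the authors' measured data.

```python
import numpy as np

def saturation_mobility(vg, id_sat, width_m, length_m, c_i_F_per_m2):
    """Extract saturation mobility and threshold voltage from transfer data using
    I_D = (W / 2L) * mu * C_i * (V_G - V_T)^2, i.e. a linear fit of sqrt(I_D)
    versus V_G. Illustrative sketch, not the authors' extraction procedure."""
    slope, intercept = np.polyfit(vg, np.sqrt(np.abs(id_sat)), 1)
    mu = 2 * length_m * slope**2 / (width_m * c_i_F_per_m2)   # m^2 / (V s)
    v_t = -intercept / slope
    return mu * 1e4, v_t                                      # cm^2 / (V s), V

# Device geometry from the abstract (W = 1500 um, L = 150 um); the gate
# capacitance and transfer curve below are hypothetical.
vg = np.linspace(-30, -10, 21)
id_sat = 1e-8 * (vg + 5.0) ** 2            # synthetic quadratic transfer curve
mu_cm2, vt = saturation_mobility(vg, id_sat, 1500e-6, 150e-6, 3e-4)
print(f"mu_sat ~ {mu_cm2:.3f} cm^2/(V s), V_T ~ {vt:.1f} V")
```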

Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel

Procedia PDF Downloads 339
705 Computational Analysis of Thermal Degradation in Wind Turbine Spars' Equipotential Bonding Subjected to Lightning Strikes

Authors: Antonio A. M. Laudani, Igor O. Golosnoy, Ole T. Thomsen

Abstract:

Rotor blades of large, modern wind turbines are highly susceptible to downward lightning strikes, as well as to triggering upward lightning; consequently, it is necessary to equip them with an effective lightning protection system (LPS) in order to avoid any damage. The performance of existing LPSs is affected by carbon fibre reinforced polymer (CFRP) structures, which lead to lightning-induced damage in the blades, e.g. via electrical sparks. A solution to prevent internal arcing would be to electrically bond the LPS and the composite structures so that they are at the same electric potential. Nevertheless, elevated temperatures are reached at the joint interfaces because of high contact resistance, which melts and vaporises some of the epoxy resin matrix around the bonding. The resulting high-pressure gases open up the bonding and can ignite thermal sparks. The objective of this paper is to predict the current density distribution and the temperature field in the adhesive joint cross-section, in order to check whether the resin pyrolysis temperature is reached and any damage is to be expected. The finite element method has been employed to solve both the current and heat transfer problems, which are considered weakly coupled. The mathematical model for the electric current includes the Maxwell-Ampère equation for the induced electric field, solved together with current conservation, while the thermal field is found from the heat diffusion equation. In this way, the current sub-model calculates the Joule heat release for a chosen bonding configuration, whereas the thermal analysis allows determining threshold values of voltage and current density not to be exceeded in order to keep the temperature across the joint below the pyrolysis temperature, thereby preventing the occurrence of outgassing. In addition, it provides an indication of the minimal number of bonding points. It is worth mentioning that the numerical procedures presented in this study can be tailored and applied to any type of joint other than adhesive ones for wind turbine blades. For instance, they can be applied to the lightning protection of aerospace bolted joints. Furthermore, they can even be customized to predict the electromagnetic response under lightning strikes of other wind turbine systems, such as nacelle and hub components.
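In a simplified quasi-static reading of the weakly coupled model described above, the electric and thermal sub-problems can be written as follows; this is a sketch consistent with the description, not necessarily the exact formulation used by the authors.

```latex
% Simplified quasi-static form of the weakly coupled electro-thermal problem
% (illustrative sketch, not necessarily the authors' exact formulation)
\begin{align}
  \nabla \cdot \left( \sigma \nabla V \right) &= 0,
  \qquad \mathbf{J} = -\sigma \nabla V,  && \text{current conservation} \\
  Q &= \sigma \,\lvert \nabla V \rvert^{2}, && \text{Joule heat release} \\
  \rho c_p \frac{\partial T}{\partial t} &= \nabla \cdot \left( k \nabla T \right) + Q. && \text{heat diffusion}
\end{align}
```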

Keywords: carbon fibre reinforced polymer, equipotential bonding, finite element method, FEM, lightning protection system, LPS, wind turbine blades

Procedia PDF Downloads 146
704 Community Engagement Strategies to Assist with the Development of an RCT Among People Living with HIV

Authors: Joyce K. Anastasi, Bernadette Capili

Abstract:

Our research team focuses on developing and testing protocols to manage chronic symptoms. For many years, our team has designed and implemented symptom management studies for people living with HIV (PLWH). We identify symptoms for which there is no cure and that are not adequately controlled by conventional therapies. As an exemplar, we describe how we successfully engaged PLWH in developing and refining our research feasibility protocol for distal sensory peripheral neuropathy (DSP) associated with HIV. With input from PLWH with DSP, our research received National Institutes of Health (NIH) research funding support. Significance: DSP is one of the most common neurologic complications in HIV. It is estimated that DSP affects 21% to 50% of PLWH. The pathogenesis of DSP in HIV is complex and unclear. Proposed mechanisms include cytokine dysregulation, neurotoxicity produced by viral proteins, and mitochondrial dysfunction associated with antiretroviral medications. There are no FDA-approved treatments for DSP in HIV. Purpose: Aims: 1) to explore the impact of DSP on the lives of PLWH, 2) to identify patients' perspectives on successful treatments for DSP, 3) to identify interventions considered feasible and sensitive to the needs of PLWH with DSP, and 4) to obtain participant input for the protocol/study design. Description of Process: We conducted a needs assessment with PLWH with DSP. From this needs assessment, we learned, from the patients' perspective, detailed descriptions of their symptoms, physical functioning with DSP, self-care remedies tried, and desired interventions. We also asked about protocol scheduling, instrument clarity, study compensation, study-related burdens, and willingness to participate in a randomized controlled trial (RCT) with a placebo and a waitlist group. Implications: We incorporated many of the suggestions learned from the needs assessment. We developed and completed a feasibility study that provided us with invaluable information that informed subsequent NIH-funded studies. In addition to our extensive clinical and research experience working with PLWH, learning from the patient perspective helped in developing our protocol and promoting a successful plan for recruitment and retention of study participants.

Keywords: clinical trial development, peripheral neuropathy, traditional medicine, HIV, AIDS

Procedia PDF Downloads 74
703 Biological Control of Karnal Bunt by Pseudomonas fluorescens

Authors: Geetika Vajpayee, Sugandha Asthana, Pratibha Kumari, Shanthy Sundaram

Abstract:

Pseudomonas species possess a variety of promising antifungal and growth-promoting properties in the wheat plant. In the present study, Pseudomonas fluorescens MTCC-9768 was tested against the plant pathogenic fungus Tilletia indica, which causes Karnal bunt, a quarantine disease of wheat (Triticum aestivum) affecting the kernels. It is listed among the I/A1 harmful organisms of wheat worldwide under EU legislation. The disease develops during the growth phase through the spread of microscopically small spores of the fungus (teliospores) dispersed by the wind. Current chemical fungicidal treatments have been reported to reduce teliospore germination, but their effect is questionable since T. indica can survive up to four years in the soil. Fungal growth inhibition tests were performed using the dual culture technique, and the results showed inhibition of 82.5%. The antagonistic bacterium-fungus interaction causes changes in the morphology of the hyphae, which were observed using lactophenol cotton blue staining and scanning electron microscopy (SEM). Rounded and swollen hyphal ends, called ‘theca’, were observed in the interacted fungus as compared to the control fungus (without bacterial interaction). The bacterium was also tested for antagonistic activities such as protease, cellulase and chitinase production and HCN production. Growth-promoting assays showed increased production of IAA by the bacterium. The bacterial secondary metabolites were extracted in different solvents to test their growth-inhibiting properties. The antifungal compound was characterized and purified by thin layer chromatography, and its Rf value (0.54) matched that of the standard antifungal compound 2,4-DAPG (Rf = 0.54). Further, in vivo experiments showed a significant decrease in disease severity in the wheat plant with both the direct injection method and seed treatment. Our results indicate that the compound extracted and purified from the antagonistic bacterium P. fluorescens MTCC-9768 may be used as a potential biocontrol agent against T. indica. They also suggest that the PGPR properties of the bacterium could be exploited by incorporating it into bio-fertilizers.

Keywords: antagonism, Karnal bunt, PGPR, Pseudomonas fluorescens

Procedia PDF Downloads 382
702 The Environmental Impact Assessment of Land Use Planning (Case Study: Tannery Industry in Al-Garma District)

Authors: Husam Abdulmuttaleb Hashim

Abstract:

Environmental pollution problems represent a great challenge to the world, threatening to destroy the progress that mankind has achieved. Organizations and associations that care about the environment are trying to warn the world of the coming danger resulting from the excessive use of natural resources and from consuming them without regard for the damage caused by their misuse. Most urban centers suffer from environmental pollution problems and from the health, economic, and social dangers that result from this pollution. Since land use planning is responsible for distributing different uses within urban centers and controlling the interactions between these uses so as to reach a homogeneous and balanced state for the different activities in cities, the occurrence of environmental problems under the existing land use planning process points to a disorder or insufficiency in that process. This disorder lies in the lack of sufficient attention to environmental considerations during land use planning and the preparation of the master plan. The research therefore set out to study this problem and find solutions for it. It assumes that using accurate, scientific methods in the early stages of the land use planning process will prevent environmental pollution problems from arising in the future, and it aims to demonstrate the importance of environmental impact assessment (EIA) as a planning tool for investigating and predicting the pollution ranges of polluting land uses within the land use planning process. The research covers the concept of environmental assessment and its kinds, clarifies environmental impact assessment and its contents, and deals with the concepts of urban planning and land use planning. It then examines the current situation of the case study (Al-Garma district) and its land use planning, identifying the most polluting use, namely the industrial land use represented by the tannery industries, and describing its contents and the environmental impacts resulting from it. Water and soil tests carried out by the researcher were analyzed, and an environmental evaluation was performed by applying an environmental impact assessment matrix using the direct method to reveal the pollution ranges of the industrial land use on the surrounding environment. Environmental and site criteria and standards were then applied using GIS and AutoCAD to select the best alternative site for the industrial area in Al-Garma district, after the research had established the unsuitability of its current location against the environmental and site constraints. The research concludes with findings and recommendations that clarify the established facts and set out the appropriate solutions.
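A minimal sketch of the kind of direct-method impact matrix evaluation described above, with hypothetical environmental components and magnitude and importance scores (none of the numbers come from the study): each cell holds a magnitude rating of the tannery land use against an environmental component, weighted by the component's importance, and the weighted scores are summed into an overall impact index that can be compared between the current site and a candidate alternative.

```python
import numpy as np

# Hypothetical environmental components and importance weights (1-10)
components = ["surface water", "groundwater", "soil", "air quality", "public health"]
importance = np.array([9, 8, 7, 6, 10])

# Hypothetical magnitude scores (-10 .. 0) of the tannery land use on each
# component, for the current site and for a candidate alternative site
magnitude = {
    "current site":     np.array([-8, -7, -6, -5, -7]),
    "alternative site": np.array([-4, -3, -3, -4, -2]),
}

# Direct method: weighted sum of magnitude x importance per alternative
for site, m in magnitude.items():
    scores = m * importance
    print(f"{site}: component scores = {scores.tolist()}, total impact index = {scores.sum()}")
```

A less negative total index would favour the alternative site; in the study itself the cell values are derived from the measured water and soil data and the applicable standards, not assumed.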

Keywords: EIA, pollution, tannery industry, land use planning

Procedia PDF Downloads 440
701 Antimicrobial and Antibiofilm Properties of Fatty Acids Against Streptococcus Mutans

Authors: A. Mulry, C. Kealey, D. B. Brady

Abstract:

Planktonic bacteria can form biofilms, which are microbial aggregates embedded within a matrix of extracellular polymeric substances (EPS) and can be found attached to abiotic or biotic surfaces. Biofilms are responsible for oral diseases such as dental caries and gingivitis and for the progression of periodontal disease. Biofilms can resist 500 to 1000 times the concentration of biocides and antibiotics used to kill planktonic bacteria. Biofilm development on oral surfaces involves four stages: initial attachment, early development, maturation and dispersal of planktonic cells. The minimum inhibitory concentration (MIC) was determined for a range of saturated and unsaturated fatty acids using the resazurin assay, followed by serial dilution and spot plating on BHI agar plates to establish the minimum bactericidal concentration (MBC). The log reduction of bacteria was also evaluated for each fatty acid. The minimum biofilm inhibitory concentration (MBIC) was determined using the crystal violet assay in 96-well plates on forming and pre-formed S. mutans biofilms in BHI supplemented with 1% sucrose. The saturated medium-chain fatty acids octanoic (C8:0), decanoic (C10:0) and undecanoic acid (C11:0) do not display strong antibiofilm properties; however, lauric (C12:0) and myristic (C14:0) acids display moderate antibiofilm properties, with 97.83% and 97.5% biofilm inhibition at 1000 µM, respectively. The monounsaturated oleic acid (C18:1) and the polyunsaturated long-chain linoleic acid (C18:2) display potent antibiofilm properties, with biofilm inhibition of 99.73% at 125 µM and 100% at 65.5 µM, respectively. The long-chain polyunsaturated omega-3 fatty acids α-linolenic acid (C18:3), eicosapentaenoic acid (EPA, C20:5) and docosahexaenoic acid (DHA, C22:6) displayed strong antibiofilm efficacy at concentrations ranging from 31.25 to 250 µg/ml. DHA is the most promising antibiofilm agent, achieving 99.73% biofilm inhibition at 15.625 µg/ml, possibly owing to the presence of six double bonds and the structural orientation of the fatty acid. To conclude, the fatty acids displaying the most antimicrobial activity appear to be medium- or long-chain unsaturated fatty acids containing one or more double bonds. The most promising agents include the omega-3 fatty acids α-linolenic acid, EPA and DHA, as well as linoleic acid and the omega-9 fatty acid oleic acid. These results indicate that fatty acids have the potential to be used as antimicrobial and antibiofilm agents against S. mutans. Future work involves further screening of the most potent fatty acids against a range of bacteria, including Gram-positive and Gram-negative oral pathogens, and incorporating the most effective fatty acids onto dental implant devices to prevent biofilm formation.
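A minimal sketch of the two read-outs used above, with made-up optical density and viable-count values (none of the numbers come from the study): percent biofilm inhibition is computed from crystal violet absorbance relative to an untreated control, and log reduction from colony counts after spot plating.

```python
import math

# Hypothetical crystal violet OD590 readings from a 96-well biofilm assay
od_untreated_control = 1.85   # biofilm grown without fatty acid
od_blank = 0.09               # medium-only background
od_treated = 0.12             # biofilm grown with the test fatty acid

# Percent biofilm inhibition relative to the untreated control
inhibition = 100.0 * (1 - (od_treated - od_blank) / (od_untreated_control - od_blank))
print(f"Biofilm inhibition: {inhibition:.2f} %")

# Hypothetical viable counts (CFU/ml) before and after fatty acid treatment
cfu_control = 2.0e8
cfu_treated = 4.0e4

# Log reduction = log10(control) - log10(treated)
log_reduction = math.log10(cfu_control) - math.log10(cfu_treated)
print(f"Log reduction: {log_reduction:.2f} log10 CFU/ml")
```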

Keywords: antibiofilm, biofilm, fatty acids, S. mutans

Procedia PDF Downloads 138
700 Sintering of YNbO3:Eu3+ Compound: Correlation between Luminescence and Spark Plasma Sintering Effect

Authors: Veronique Jubera, Ka-Young Kim, U-Chan Chung, Amelie Veillere, Jean-Marc Heintz

Abstract:

Light-emitting materials and all-solid-state lasers are widely used in optical applications and materials science as excitation sources and for instrumental measurement, medical applications, metal shaping, etc. Recently, promising optical efficiencies have been recorded on ceramics, which offer a cheaper and faster route to crystallized materials. The choice and optimization of the sintering process is the key point in fabricating transparent ceramics. It includes careful preparation of the powder with the choice of an adequate synthesis route, a pre-heat-treatment, the reproducibility of the sintering cycle, and the polishing and post-annealing of the ceramic. Densification is the main factor needed to reach satisfying transparency, and many technologies are now available. The symmetry of the unit cell plays a crucial role in the optical diffusion (scattering) of the material; therefore, cubic-symmetry compounds, which have an isotropic refractive index, are preferred. The cubic Y3NbO7 matrix is an interesting host that can accept a high concentration of rare earth doping elements, and it has been demonstrated that SPS is an efficient way to sinter this material. Minimizing diffusion losses requires a fine ceramic microstructure, with grains generally smaller than one hundred nanometers; in this case, grain growth is not an obstacle to transparency. The ceramic properties are then isotropic, which frees the process from the shaping step of orienting the ceramics that is required for compounds of lower symmetry. After optimization of the synthesis route, several SPS parameters such as heating rate, holding, dwell time and pressure were adjusted in order to increase the densification of the Eu3+ doped Y3NbO7 pellets. The luminescence data, coupled with X-ray diffraction analysis and electron diffraction microscopy, highlight the existence of several distorted environments of the doping element in the studied defective fluorite-type host lattice. Indeed, the fast and high crystallization rate obtained puts in evidence a lack of miscibility in the phase diagram, the final composition of the pellet being driven by the ratio between the niobium and yttrium elements. By following the luminescence properties, we demonstrate a direct impact of the SPS process on this material.

Keywords: emission, rare earth niobate, spark plasma sintering, lack of miscibility

Procedia PDF Downloads 247
699 Greek Teachers' Understandings of Typical Language Development and of Language Difficulties in Primary School Children and Their Approaches to Language Teaching

Authors: Konstantina Georgali

Abstract:

The present study explores Greek teachers’ understandings of typical language development and of language difficulties. Its core aim was to highlight that teachers need a thorough understanding of educational linguistics, that is, of how language figures in education. They should also be aware of how language should be taught so as to promote language development for all students while at the same time supporting the needs of children with language difficulties in an inclusive ethos. The study thus argued that language can be a dynamic learning mechanism in the minds of all children and a powerful teaching tool in the hands of teachers, and provided current research evidence to show that the structural and morphological particularities of native languages (in this case, of the Greek language) can be used by teachers to enhance children’s understanding of language and simultaneously improve oral language skills, both for children with typical language development and for those with language difficulties. The research was based on a sequential exploratory mixed methods design deployed in three consecutive and integrative phases. The first phase involved 18 exploratory interviews with teachers; its findings informed the second phase, a questionnaire survey with 119 respondents. Contradictory questionnaire results were further investigated in a third phase employing a formal testing procedure with 60 children attending Y1, Y2 and Y3 of primary school (a research group of 30 children with language impairment and a comparison group of 30 children with typical language development, both identified by their class teachers). Results showed both strengths and weaknesses in teachers’ awareness of educational linguistics and of language difficulties. They also provided a different perspective on children’s language needs and on language teaching approaches, one that reflected current advances and conceptualizations of language problems and opened a new window on how best these needs can be met in an inclusive ethos. However, teachers barely used teaching approaches that could capitalize on the particularities of the Greek language to improve language skills for all students in class. Although they seemed to realize the importance of oral language skills and their knowledge base on language-related issues was adequate, their practices indicated that they did not see language as a dynamic teaching and learning mechanism that can promote children’s language development and, in tandem, improve academic attainment. Important educational implications arose, together with clear indications that the findings generalize beyond the Greek educational context.

Keywords: educational linguistics, inclusive ethos, language difficulties, typical language development

Procedia PDF Downloads 370
698 Limbic Involvement in Visual Processing

Authors: Deborah Zelinsky

Abstract:

The retina filters millions of incoming signals into a smaller number of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals serve eyesight (so-called "image-forming" signals). However, there are other, faster signals that travel elsewhere and are not directly involved with eyesight ("non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers currently look at parvocellular and magnocellular processing pathways without realizing that these are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the "Where am I" (orientation), "Where is it" (localization), and "What is it" (identification) pathways. Now, among others, there is a "How am I" (animation) and a "Who am I" (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGCs) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities. These signals arise from the ipRGCs, which were only discovered some 20 years ago, and classical prescriptions do not yet address the campana retinal interneurons discovered only 2 years ago. As eye care providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.

Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing

Procedia PDF Downloads 68
697 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model

Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis

Abstract:

Magnetic navigation of a drug inside the human vessels is a very important concept since the drug is delivered to the desired area. Consequently, the quantity of drug required to reach therapeutic levels is reduced, while the drug concentration at the targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles, onto whose surface anti-tumor agents are loaded. The magnetic field required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors influencing the efficiency of magnetic nanoparticles for biomedical driving applications are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered, starting with the magnetic force from the MRI's main magnet static field and the magnetic field gradient force from the dedicated propulsion gradient coils. The static field is responsible for the aggregation of the nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces among the aggregated nanoparticles and with the wall, and the Stokes drag force on each particle, are considered; only spherical particles are used in this study. In addition, the gravitational force and the buoyancy force are included. Finally, the van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of particle motion. To determine the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field; at the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles as they are navigated by the magnetic field produced by the MRI device. Under the influence of the fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. The platform can navigate the particles along the desired trajectory with an efficiency of 80-90%; on the other hand, a small number of particles stick to the walls and remain there for the rest of the simulation.
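A minimal sketch of the optimization loop described above, assuming a drastically simplified in-house particle model in place of the OpenFOAM simulation and a plain (mu, lambda) evolution strategy standing in for the full CMA-ES (the gradient parameterization, trajectory and particle dynamics are all illustrative): each candidate gradient sequence is scored by the mean distance between the resulting particle path and the desired trajectory, and the best candidates drive the next generation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Desired trajectory: stay at y = 0 while being advected along x (illustrative)
t_steps, dt = 200, 1e-3
target_y = np.zeros(t_steps)

def simulate_particle(gradients):
    """Toy stand-in for the CFD/particle simulation: the particle is pushed in y
    by a piecewise-constant magnetic gradient force and slowed by Stokes-like drag."""
    y, vy = 0.5e-3, 0.0                      # initial lateral offset [m] and velocity
    drag, mag_gain = 50.0, 1.0               # illustrative coefficients
    seg_len = t_steps // len(gradients)
    path = np.empty(t_steps)
    for i in range(t_steps):
        g = gradients[min(i // seg_len, len(gradients) - 1)]
        ay = mag_gain * g - drag * vy        # gradient force minus drag
        vy += ay * dt
        y += vy * dt
        path[i] = y
    return path

def cost(gradients):
    """Mean distance between the simulated path and the desired trajectory."""
    return float(np.mean(np.abs(simulate_particle(gradients) - target_y)))

# Simple (mu, lambda) evolution strategy over 6 piecewise-constant gradient values
n_params, mu, lam, sigma = 6, 4, 16, 0.5
mean = np.zeros(n_params)
for gen in range(40):
    candidates = mean + sigma * rng.standard_normal((lam, n_params))
    fitness = np.array([cost(c) for c in candidates])
    elite = candidates[np.argsort(fitness)[:mu]]
    mean = elite.mean(axis=0)                # recombine the best candidates
    sigma *= 0.95                            # slow step-size decay
print("Best gradient sequence:", np.round(mean, 3), "final cost:", round(cost(mean), 6))
```

In the actual platform, the cost evaluation is a full OpenFOAM run under fluid flow and all seven forces, and the step-size and covariance adaptation of CMA-ES replace the fixed decay used here.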

Keywords: artery, drug, nanoparticles, navigation

Procedia PDF Downloads 98
696 Application Reliability Method for the Analysis of the Stability Limit States of Large Concrete Dams

Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche

Abstract:

Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure; the transition from the stable state to the failed state is not clear-cut, so the stability failure process is treated as a probabilistic event. Controlling the risk of failure is of capital importance and rests on a cross analysis of the severity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of such works, including when calculating the stability of large structures exposed to a major risk in the event of an accident or breakdown. This work is concerned with the probability of failure of concrete dams through the application of reliability analysis methods used in engineering, in our case level II methods via the study of limit states. Hence, the probability of failure is estimated by analytical methods of the FORM (First Order Reliability Method) and SORM (Second Order Reliability Method) type. By way of comparison, a level III method was also used, which provides a full analysis of the problem by integrating the probability density function of the random variables over the safety domain using Monte Carlo simulations. Taking into account the change in stresses under the normal, exceptional and extreme load combinations acting on the dam, the calculation results obtained provide acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensities, causing a significant decrease in safety, especially in the presence of unique and extreme load combinations. Shear forces then induce sliding that threatens the reliability of the structure through intolerable values of the probability of failure, especially when uplift increases under a hypothetical failure of the drainage system.
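A minimal sketch of the level III (Monte Carlo) estimate described above, assuming a simplified sliding limit state for a gravity dam with illustrative random variables for self-weight, uplift, horizontal thrust, friction angle and cohesion (none of the distributions or values come from the paper): the probability of failure is the fraction of samples for which the limit state function g = resistance - load is negative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000  # number of Monte Carlo samples

# Illustrative random variables for a sliding limit state (not from the paper)
W = rng.normal(120e6, 6e6, n)                         # dam self-weight [N]
U = rng.normal(40e6, 6e6, n)                          # uplift force [N]
H = rng.gumbel(60e6, 8e6, n)                          # horizontal hydrostatic thrust [N]
phi = rng.normal(np.deg2rad(35), np.deg2rad(2), n)    # friction angle [rad]
c = rng.lognormal(np.log(5e4), 0.3, n)                # cohesion [Pa]
A = 1000.0                                            # sliding plane area [m^2]

# Sliding limit state: g = shear resistance - driving force
resistance = c * A + (W - U) * np.tan(phi)
g = resistance - H

pf = np.mean(g < 0)                                   # probability of failure
print(f"Estimated probability of failure: {pf:.2e}")
```

Level II methods such as FORM and SORM approximate the same probability analytically from the reliability index at the design point instead of sampling, which is why the paper uses the Monte Carlo result as a reference for comparison.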

Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor

Procedia PDF Downloads 307
695 The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis

Authors: Jui-Teng Liao

Abstract:

The selected-response (SR) format is commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized assessment and classroom assessment) because of its strengths in content validity, construct validity, and scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test (e.g., the more test items it contains), the higher the reliability and validity the test is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and cognitive processing. It is therefore important to investigate the impact of question types on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which this question format can reliably measure learners’ L2 reading comprehension. The present study therefore adopted generalizability (G) theory to investigate the score reliability of the SR format in L2 reading tests, focusing on how many test items a reading test should include. Specifically, this study investigated the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. G theory is a comprehensive statistical framework for estimating the score reliability of tests and validating their results. Data were collected from 108 English as a second language students who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format. The computer program mGENOVA was used to analyze the data with multivariate designs (i.e., scenarios). Based on the results of the G theory analyses, the findings indicated that the number of test items has a critical impact on the score reliability of an L2 reading test. Furthermore, the findings revealed that different types of reading questions require different numbers of test items for a reliable assessment of learners’ L2 reading proficiency. Further implications for teaching practice and classroom-based assessment are discussed.
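A minimal sketch of the kind of decision (D) study that follows a persons x items G study, assuming illustrative variance components rather than the ones estimated with mGENOVA in the study: for relative decisions, the generalizability coefficient of a test with n_i items is E rho^2 = sigma^2_p / (sigma^2_p + sigma^2_pi,e / n_i), so sweeping n_i shows how many items of a given question type would be needed to reach a target reliability.

```python
# Illustrative variance components from a hypothetical persons x items G study
# (sigma2_p: universe-score variance; sigma2_pi_e: person-by-item interaction
# confounded with residual error). Values are assumptions, not study results.
variance_components = {
    "factual":     {"sigma2_p": 0.040, "sigma2_pi_e": 0.180},
    "vocabulary":  {"sigma2_p": 0.055, "sigma2_pi_e": 0.150},
    "inferential": {"sigma2_p": 0.030, "sigma2_pi_e": 0.220},
}

def g_coefficient(sigma2_p, sigma2_pi_e, n_items):
    """Generalizability coefficient for relative decisions with n_items items."""
    return sigma2_p / (sigma2_p + sigma2_pi_e / n_items)

target = 0.80
for qtype, vc in variance_components.items():
    for n in range(1, 61):
        if g_coefficient(vc["sigma2_p"], vc["sigma2_pi_e"], n) >= target:
            print(f"{qtype}: about {n} items needed for E rho^2 >= {target}")
            break
```

With these assumed components, question types with larger person-by-item error (here, the inferential items) require more items to reach the same reliability, which mirrors the kind of conclusion the multivariate G study is designed to support.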

Keywords: second language reading assessment, validity and reliability, generalizability theory, academic reading, question format

Procedia PDF Downloads 66