Search results for: varying an axial load
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4719

189 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies on cloud services overcomes these issues and helps organizations improve efficiency and deliver faster, without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while repetitive work and manual effort are reduced. Scalable CI/CD built on cloud services such as ECS (Elastic Container Service), AWS Fargate, ECR (to store Docker images with all dependencies), serverless computing (serverless virtual machines), cloud logging (for monitoring errors and logs), security groups (for inside/outside access to the application), Docker containerization (Docker-based image and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments, accommodate dynamic workloads, and increase efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application from the relevant branches, testing it with a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as it scales based on need. 
Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, alleviating concerns about scalability, maintenance costs, and resource needs. Building scalable automation testing on cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) lets organizations run more than 500 test cases in parallel, aiding the detection of race conditions and performance issues and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands and allowing teams to scale resources up or down as needed. It optimizes costs, since resources are paid for only as they are used, and increases reliability. Scalable CI/CD pipelines employ automated testing and validation to detect and prevent errors early in the development cycle.
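As a minimal sketch of the scalable parallel testing idea described above, a worker pool can fan test cases out concurrently, much as containerized Fargate tasks would; the test-runner name and the placeholder pass/fail logic here are hypothetical illustrations, not part of the authors' framework.

```python
from concurrent.futures import ThreadPoolExecutor

def run_test(case_id: int) -> tuple[int, bool]:
    """Hypothetical test runner: in a real pipeline each case would
    exercise a containerized application instance."""
    result = (case_id * 2) % 7 != 0  # placeholder pass/fail logic
    return case_id, result

def run_suite(n_cases: int, workers: int = 32) -> dict[int, bool]:
    # Fan the cases out across a pool, mirroring how container tasks
    # would execute them concurrently in the cloud.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_test, range(n_cases)))

results = run_suite(500)
failed = [cid for cid, ok in results.items() if not ok]
print(f"{len(results)} cases run, {len(failed)} failed")
```

Scaling then amounts to raising `workers` (or the number of container tasks behind it) rather than provisioning build servers.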

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 16
188 Molecular Characterization and Arsenic Mobilization Properties of a Novel Strain IIIJ3-1 Isolated from Arsenic Contaminated Aquifers of Brahmaputra River Basin, India

Authors: Soma Ghosh, Balaram Mohapatra, Pinaki Sar, Abhijeet Mukherjee

Abstract:

Microbial roles in arsenic (As) mobilization in the groundwater aquifers of the Brahmaputra river basin (BRB) in India, severely threatened by high concentrations of As, remain largely unknown. The present study therefore comprises a molecular and ecophysiological characterization of an indigenous bacterial strain, IIIJ3-1, isolated from As-contaminated groundwater of the BRB, and the application of this strain in several microcosm set-ups differing in organic carbon (OC) source and terminal electron acceptor (TEA), to understand its role in As dissolution under aerobic and anaerobic conditions. Strain IIIJ3-1 was found to be a new facultatively anaerobic, gram-positive, endospore-forming strain capable of arsenite (As3+) oxidation and dissimilatory arsenate (As5+) reduction. The bacterium exhibited a low genomic (G+C) content (45 mol%). Although its 16S rRNA gene sequence showed a maximum similarity of 99% with Bacillus cereus ATCC 14579(T), the DNA-DNA relatedness of their genomic DNAs was only 49.9%, well below the value recommended to delimit different species. Although the abundance of the fatty acids iC17:0 and iC15:0 and of menaquinone (MK) 7 corroborates its taxonomic affiliation with the B. cereus sensu lato group, the presence of hydroxy fatty acids (HFAs), C18:2, MK5, and MK6 marks its uniqueness. Besides being highly As resistant (MTC = 10 mM As3+, 350 mM As5+), metabolically diverse, and an efficient aerobic As3+ oxidizer, it exhibited near-complete dissimilatory reduction of As5+ (1 mM). Utilization of various carbon sources with As5+ as TEA revealed lactate to be the best electron donor. Aerobic biotransformation assays yielded a lower Km for As3+ oxidation than for As5+ reduction. Arsenic homeostasis was found to be conferred by the presence of the arr, arsB, aioB, and acr3(1) genes. 
Scanning electron microscopy (SEM) coupled with energy-dispersive X-ray (EDX) analysis of this bacterium revealed a reduction in cell size upon exposure to As and the formation of As-rich electron-opaque dots following growth with As3+. Incubation of this strain with sterilized sediment collected from BRB aquifers under varying OC, TEA, and redox conditions revealed that the strain caused the highest As mobilization from the solid to the aqueous phase under anaerobic conditions with lactate and nitrate as electron donor and acceptor, respectively. The co-release of the highest concentrations of oxalic acid, a well-known bioweathering agent, a considerable fold increase in viable cell counts, and SEM-EDX and X-ray diffraction analyses of the sediment after incubation under this condition indicated that As release follows from microbial bioweathering of the minerals. The co-release of other elements statistically confirms the decoupled release of As from Fe and Zn. Principal component analysis also revealed a prominent role of nitrate under aerobic and/or anaerobic conditions in As release by strain IIIJ3-1. This study is therefore the first to isolate, characterize, and reveal the As mobilization properties of a strain belonging to the Bacillus cereus sensu lato group from the highly As-contaminated aquifers of the Brahmaputra river basin.

Keywords: anaerobic microcosm, arsenic rich electron opaque dots, Arsenic release, Bacillus strain IIIJ3-1

Procedia PDF Downloads 108
187 Design Development and Qualification of a Magnetically Levitated Blower for CO₂ Scrubbing in Manned Space Missions

Authors: Larry Hawkins, Scott K. Sakakura, Michael J. Salopek

Abstract:

The Marshall Space Flight Center is designing and building a next-generation CO₂ removal system, the Four Bed Carbon Dioxide Scrubber (4BCO₂), which will use the International Space Station (ISS) as a testbed. The current ISS CO₂ removal system has faced many challenges in both performance and reliability. Given that CO₂ removal is an integral Environmental Control and Life Support System (ECLSS) subsystem, the 4BCO₂ scrubber has been designed to eliminate the shortfalls identified in the current ISS system. One of the key required upgrades was to improve the performance and reliability of the blower that provides the airflow through the CO₂ sorbent beds. A magnetically levitated blower, capable of higher airflow and pressure than the previous system, was developed to meet this need. The design and qualification testing of this next-generation blower are described here. The new blower features a high-efficiency permanent magnet motor, a five-axis active magnetic bearing system, and a compact controller containing both a variable speed drive and a magnetic bearing controller. The blower uses a centrifugal impeller to pull air from the inlet port and drive it through an annular space around the motor and magnetic bearing components to the exhaust port. Technical challenges of the blower and controller development include survival of the blower system under launch random vibration loads, operation in microgravity, packaging under strict size and weight requirements, and successful operation during 4BCO₂ operational changeovers. An ANSYS structural dynamic model of the controller was used to predict the response to the NASA-defined random vibration spectrum and to drive minor design changes. The simulation results are compared to measurements from qualification testing of the controller on a vibration table. Predicted blower performance is compared to flow-loop testing measurements. 
Dynamic response of the system to valve changeovers is presented and discussed using high bandwidth measurements from dynamic pressure probes, magnetic bearing position sensors, and actuator coil currents. The results presented in the paper show that the blower controller will survive launch vibration levels, the blower flow meets the requirements, and the magnetic bearings have adequate load capacity and control bandwidth to maintain the desired rotor position during the valve changeover transients.

Keywords: blower, carbon dioxide removal, environmental control and life support system, magnetic bearing, permanent magnet motor, validation testing, vibration

Procedia PDF Downloads 107
186 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy

Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon

Abstract:

Purpose: Inorganic scintillating dosimetry is a promising recent technique to solve several dosimetric issues and provide quality assurance in radiation therapy. Despite several advantages, the major issue with scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy treatment, to ensure a Cerenkov-free signal and the best match between delivered and prescribed doses during treatment. Methods: A simple, small-scale infrared inorganic scintillating detector of 100 µm diameter with a sensitive scintillating volume of 2×10⁻⁶ mm³ was developed. A prototype dose verification system was introduced based on the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and on a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). All measurements were performed in IBA water tank phantoms, following international Technical Report Series recommendations (TRS 381) for radiotherapy and TG-43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte Carlo (MC) simulation, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization. 
The detector provides a fully linear response with dose over the 4 cGy to 800 cGy range, independently of the field size selected, from 5 × 5 cm² down to 0.5 × 0.5 cm². Excellent repeatability (0.2% variation from average) and day-to-day reproducibility (0.3% variation) were observed. Measurements demonstrated that the ISD response scales with dose rate (R² = 1) from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water show identical behavior, with a build-up maximum depth dose at 15 mm for the different small-field irradiations. Field profiles as small as 0.5 × 0.5 cm² were characterized, and the field cross-profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02%, with a very low convolution effect thanks to the small sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that, accounting for energy dependence, measurements agree within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The proposed scintillating detector shows no Cerenkov radiation and efficient performance for several radiation therapy measurement parameters. It is therefore anticipated that the IR-ISD system can be validated in direct clinical investigations, such as appropriate dose verification and quality control in the Treatment Planning System (TPS).
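The dose-linearity claim above (detector count rate proportional to dose, R² = 1) is the kind of result that follows from an ordinary least-squares fit; a small sketch with synthetic readings (not the authors' data) shows how the coefficient of determination is computed.

```python
def linear_r2(xs, ys):
    """Fit y = a + b*x by least squares and return (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                     # slope
    a = my - b * mx                   # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot  # R^2 = 1 means perfectly linear

# Synthetic detector readings over the 4-800 cGy range (counts ~ dose)
doses = [4, 50, 100, 200, 400, 800]   # cGy
counts = [d * 1000 for d in doses]    # hypothetical photon counts
a, b, r2 = linear_r2(doses, counts)
```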

Keywords: IR-Scintillating detector, dose measurement, micro-scintillators, Cerenkov effect

Procedia PDF Downloads 158
185 Study on Aerosol Behavior in Piping Assembly under Varying Flow Conditions

Authors: Anubhav Kumar Dwivedi, Arshad Khan, S. N. Tripathi, Manish Joshi, Gaurav Mishra, Dinesh Nath, Naveen Tiwari, B. K. Sapra

Abstract:

In a nuclear reactor accident scenario, a large number of fission products may be released into the piping system of the primary heat transport circuit. The released fission products, mostly in aerosol form, deposit on the inner surface of the piping system, mainly through gravitational settling and thermophoretic deposition. The removal processes in a complex piping system are controlled to a large extent by thermal-hydraulic conditions such as temperature, pressure, and flow rate. These parameters generally vary with time and must therefore be carefully monitored to predict aerosol behavior in the piping system. The removal of aerosol depends on particle size, which determines how many particles deposit or travel across the bends and reach the other end of the piping system. The released aerosol deposits onto the inner surface of the piping system through various mechanisms, including gravitational settling, Brownian diffusion, and thermophoretic deposition. To quantify deposition correctly, identifying and understanding these mechanisms is of great importance; they are significantly affected by the flow and thermodynamic conditions, and thermophoresis in particular plays a significant role in particle deposition. In the present study, a series of experiments was performed in the piping system of the National Aerosol Test Facility (NATF), BARC, using metal (zinc) aerosols in dry environments to study the spatial distribution of particle mass and number concentration and their depletion due to the various removal mechanisms in the piping system. The experiments were performed at two different carrier gas flow rates. The commercial CFD software FLUENT was used to determine the distribution of temperature, velocity, pressure, and turbulence quantities in the piping system. 
In addition to the built-in models for turbulence, heat transfer, and flow in the commercial CFD code (FLUENT), a population balance model (PBM) sub-model is used to describe the coagulation process and to compute the number concentration, along with the size distribution, at different sections of the piping. In the sub-model, coagulation kernels are incorporated through a user-defined function (UDF). The experimental results are compared with the CFD results. It is found that most of the Zn particles (more than 35%) deposit near the inlet of the plenum chamber, while deposition in the piping sections is low. The MMAD decreases along the length of the test assembly, showing that large particles deposit or are removed in the course of the flow and only fine particles travel to the end of the piping system. The effect of bends is also observed: the relative loss in mass concentration at bends is larger at the higher flow rate. The simulation results show that thermophoretic and depositional effects are more dominant for the small and large sizes than for intermediate particle sizes. Both SEM and XRD analyses of the collected samples show that the particles are highly agglomerated, non-spherical, and composed mainly of ZnO. The coupled model framed in this work could be used as an important tool for predicting the size distribution and concentration of other aerosols released during a reactor accident scenario.
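The coagulation process that the PBM sub-model tracks is governed by the discrete Smoluchowski equation; a minimal explicit-Euler sketch with a constant kernel (a crude stand-in for the size-dependent kernels supplied via UDFs in the paper) shows how the particle number concentration depletes as particles agglomerate.

```python
def coagulation_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski equation.
    n[k] is the number concentration of clusters of (k+1) monomers;
    K is a constant coagulation kernel (illustrative assumption)."""
    size = len(n)
    dn = [0.0] * size
    for i in range(size):
        for j in range(size):
            loss = K * n[i] * n[j]
            dn[i] -= loss                    # i-mers consumed by collisions
            if i + j + 1 < size:
                dn[i + j + 1] += 0.5 * loss  # (i+j)-mer formed; 1/2 avoids double count
    return [x + dt * d for x, d in zip(n, dn)]

# Start from monomers only; coagulation conserves mass while the
# total particle number decreases as clusters merge.
n = [1.0] + [0.0] * 19
for _ in range(100):
    n = coagulation_step(n, K=0.1, dt=0.05)
```

With the analytic constant-kernel solution N(t) = N0 / (1 + K N0 t / 2), the total number after t = 5 should sit near 0.8 of the initial value, which the loop reproduces.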

Keywords: aerosol, CFD, deposition, coagulation

Procedia PDF Downloads 121
184 Synthesis by Mechanical Alloying and Characterization of FeNi₃ Nanoalloys

Authors: Ece A. Irmak, Amdulla O. Mekhrabov, M. Vedat Akdeniz

Abstract:

There is growing interest in the synthesis and characterization of nanoalloys, since the unique chemical and physical properties of nanoalloys can be tuned and, consequently, new structural motifs can be created by varying the type of constituent elements, the atomic and magnetic ordering, and the size and shape of the nanoparticles. Owing to fine-size effects, magnetic nanoalloys attract considerable attention for their enhanced mechanical, electrical, optical, and magnetic behavior. As important magnetic nanoalloys, Fe-Ni based systems are expected to find widening application in the chemical and aerospace industries and in magnetic biomedical applications. Noble metals have been used in biomedical applications for several years because of their surface plasmon properties. In this respect, iron-nickel nanoalloys are promising materials for magnetic biomedical applications because they show novel properties such as superparamagnetism and surface plasmon resonance. There is also great interest in using Fe-Ni based nanoalloys as radar-absorbing materials in the aerospace and stealth industries, owing to their high Curie temperature, high permeability, and high saturation magnetization with good thermal stability. In this study, FeNi₃ bimetallic nanoalloys were synthesized by mechanical alloying in a planetary high-energy ball mill. In mechanical alloying, micron-size powders are placed into the mill with the milling media. The powders are repeatedly deformed, fractured, and alloyed by high-energy collisions under the impact of the balls until the desired composition and particle size are achieved. The experimental studies were carried out in two parts. First, dry mechanical alloying with high-energy dry planetary ball milling was applied to obtain FeNi₃ nanoparticles. Second, dry milling was followed by surfactant-assisted ball milling to observe the effect of surfactant and solvent on the structure, size, and properties of the FeNi₃ nanoalloys. 
In the first part, the iron-nickel powder sample was prepared with a 1:3 iron-to-nickel atomic ratio to produce FeNi₃ nanoparticles and a 1:10 powder-to-ball weight ratio. To avoid oxidation during milling, the vials were filled with Ar inert gas before milling started. The powders were milled for 80 hours in total, and the synthesis of the FeNi₃ intermetallic nanoparticles by mechanical alloying was achieved within 40 hours. Regarding particle size, the fraction of nano-sized particles was found to increase with milling time. In the second part of the study, dry milling of the Fe and Ni powders with the same stoichiometric ratio was repeated. Then, to prevent agglomeration and obtain smaller nanoparticles with superparamagnetic behavior, surfactants and a solvent were added to the system after the 40-hour milling time, upon completion of the mechanical alloying. During surfactant-assisted ball milling, heptane was used as the milling medium, and oleic acid and oleylamine were used as surfactants in the high-energy ball milling processes. The alloyed particles were characterized in terms of microstructure, morphology, particle size, and thermal and magnetic properties with respect to milling time by X-ray diffraction, scanning electron microscopy, energy-dispersive spectroscopy, vibrating-sample magnetometry, and differential scanning calorimetry.
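The 1:3 Fe:Ni atomic ratio and 1:10 powder-to-ball weight ratio above translate into charge masses as follows; the molar masses are standard atomic weights, and the 10 g batch size is an arbitrary example, not a value from the paper.

```python
M_FE, M_NI = 55.845, 58.693   # g/mol, standard atomic weights of Fe and Ni

def feni3_charge(total_g):
    """Split a powder charge into Fe and Ni masses for a 1:3 atomic ratio."""
    formula_mass = M_FE + 3 * M_NI          # one FeNi3 formula unit, g/mol
    fe = total_g * M_FE / formula_mass
    ni = total_g * 3 * M_NI / formula_mass
    return fe, ni

fe_g, ni_g = feni3_charge(10.0)   # e.g. a 10 g powder charge
ball_g = 10 * 10.0                # 1:10 powder-to-ball weight ratio
```

Note that the 1:3 atomic ratio gives roughly a 24:76 mass split, since Ni is only slightly heavier per atom than Fe.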

Keywords: iron-nickel systems, magnetic nanoalloys, mechanical alloying, nanoalloy characterization, surfactant-assisted ball milling

Procedia PDF Downloads 154
183 Antibiotic Prophylaxis Habits in Oral Implant Surgery in the Netherlands: A Cross-Sectional Survey

Authors: Fabio Rodriguez Sanchez, Josef Bruers, Iciar Arteagoitia, Carlos Rodriguez Andres

Abstract:

Background: Oral implants are a routine treatment to replace lost teeth. Although they have a high rate of success, implant failures do occur. Perioperative antibiotics have been suggested to prevent postoperative infections and dental implant failures, but they remain a controversial treatment in healthy patients. The objective of this study was to determine whether antibiotic prophylaxis is a common treatment in the Netherlands among general dentists, maxillofacial surgeons, periodontists, and implantologists in conjunction with oral implant surgery in healthy patients, and to assess the nature of antibiotic prescriptions in order to evaluate whether any consensus has been reached and whether current recommendations are being followed. Methodology: An observational cross-sectional study based on a web survey, reported according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines. A validated questionnaire, developed by Deeb et al. (2015), was translated and slightly adjusted to circumstances in the Netherlands; it was used with the explicit permission of the authors. The questionnaire contained both close-ended and some open-ended questions on the following topics: demographics, qualification, antibiotic type, prescription duration, and dosage. An email was sent in February 2018 to a sample of 600 general dentists and to all 302 oral implantologists, periodontists, and maxillofacial surgeons recognized by the Dutch Association of Oral Implantology (NVOI) as oral health care providers placing oral implants. The email included a brief introduction to the study objectives and a link to the web questionnaire, which could be filled in anonymously. Overall, 902 questionnaires were sent; however, 29 were not correctly received due to incorrect email addresses, so a total of 873 professionals were reached. Collected data were analyzed using SPSS (IBM Corp., released 2012, Armonk, NY). 
Results: The questionnaire was returned by 218 participants (response rate 24.2%): 45 female (20.8%) and 171 male (79.2%). Two respondents were excluded from the study group because they were not currently working as oral health providers. Overall, 151 (69.9%) placed oral implants on a regular basis. Of these participants, 79 (52.7%) prescribed antibiotics only in specific situations, 66 (44.0%) always prescribed antibiotics, and 5 dentists (3.3%) did not prescribe antibiotics at all when placing oral implants. Of the participants who prescribed antibiotics, 83 did so both pre- and postoperatively (58.5%), 12 exclusively postoperatively (8.5%), and 47 followed an exclusively preoperative regimen (33.1%). A single oral dose of 2,000 mg amoxicillin 1 hour prior to treatment was the most prescribed preoperative regimen. The most frequently prescribed postoperative regimen was 500 mg amoxicillin three times daily for 7 days after surgery. On average, oral health professionals prescribed 6,923 mg of antibiotics in conjunction with oral implant surgery, varying from 500 to 14,600 mg. Conclusions: Antibiotic prophylaxis in conjunction with oral implant surgery is prescribed in the Netherlands on a rather large scale. Dutch professionals may prescribe antibiotics more cautiously than colleagues in other countries, and there seems to be a narrower range of antibiotic types and regimens prescribed. Nevertheless, recommendations based on the latest published evidence are frequently not followed.
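The regimens quoted above imply the following total course doses; this is simple arithmetic on the reported regimens, not data from the survey itself.

```python
def course_total_mg(dose_mg, doses_per_day=1, days=1):
    """Total antibiotic mass (mg) for a regimen."""
    return dose_mg * doses_per_day * days

preop = course_total_mg(2000)        # single 2,000 mg dose 1 h before surgery
postop = course_total_mg(500, 3, 7)  # 500 mg three times daily for 7 days
combined = preop + postop            # a pre- plus postoperative course
```

The most common preoperative regimen alone totals 2,000 mg, the common postoperative course 10,500 mg, and the two combined 12,500 mg, which helps situate the reported mean of 6,923 mg between the exclusively preoperative and the combined regimens.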

Keywords: clinical decision making, infection control, antibiotic prophylaxis, dental implants

Procedia PDF Downloads 121
182 Distribution, Source Apportionment and Assessment of Pollution Level of Trace Metals in Water and Sediment of a Riverine Wetland of the Brahmaputra Valley

Authors: Kali Prasad Sarma, Sanghita Dutta

Abstract:

Deepor Beel (DB) is the lone Ramsar site and an important wetland of the Brahmaputra valley in the state of Assam. Local people from fourteen peripheral villages traditionally utilize the wetland for harvesting vegetables, flowers, aquatic seeds, medicinal plants, fish, molluscs, fodder for domestic cattle, etc. It is therefore of great importance to understand the concentration and distribution of trace metals in the water-sediment system of the beel in order to protect its ecological environment. DB lies between 26°05′26′′N and 26°09′26′′N latitude and 90°36′39′′E and 91°41′25′′E longitude. Water samples were collected from the surface layer of water up to 40 cm deep, and sediment samples from the top 5 cm layer of surface sediment. The trace metals in waters and sediments were analysed using ICP-OES. Organic carbon was analysed using a TOC analyser. The different minerals present in the sediments were confirmed by X-ray diffraction (XRD). SEM images were recorded using a scanning electron microscope attached to an energy-dispersive X-ray unit, with an accelerating voltage of 20 kV. All statistical analyses were performed using SPSS 20.0 for Windows. In the present research, the distribution, source apportionment, temporal and spatial variability, extent of pollution, and ecological risk of eight toxic trace metals in the sediments and water of DB were investigated. The average concentrations of chromium (Cr) (both seasons), copper (Cu) and lead (Pb) (pre-monsoon), and zinc (Zn) and cadmium (Cd) (post-monsoon) in sediments were higher than the consensus-based threshold effect concentration (TEC). Persistent exposure to toxic trace metals in sediments poses a potential threat, especially to sediment-dwelling organisms. The degree of pollution in DB sediments for Pb, cobalt (Co), Zn, Cd, Cr, Cu, and arsenic (As) was assessed using the Enrichment Factor (EF), geo-accumulation index (Igeo), and Pollution Load Index (PLI). 
The results indicated that contamination of surface sediments in DB is dominated by Pb and Cd and, to a lesser extent, by Co, Fe, Cu, Cr, As, and Zn. Significant positive correlations among the element pairs Co/Fe and Zn/As in water, and Cr/Zn and Fe/As in sediments, indicate a similar source of origin for these metals. The interaction between trace metals in water and sediments shows significant variation (F = 94.02, P < 0.001), suggesting maximum mobility of trace metals in DB sediments and water. Source apportionment of the heavy metals was carried out using Principal Component Analysis (PCA). SEM-EDS detected the presence of Cd, Cu, Cr, Zn, Pb, As, and Fe in the sediment samples. The average concentrations of Cd, Zn, Pb, and As in the bed sediments of DB were found to be higher than their crustal abundances. The EF values indicate that Cd and Pb are significantly enriched. Source apportionment of the eight metals using PCA revealed that Cd was anthropogenic in origin; Pb, As, Cr, and Zn had mixed sources; whereas Co, Cu, and Fe were natural in origin.
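The EF, Igeo, and PLI indices used above follow standard definitions: EF normalizes a metal to a reference element (commonly Fe) against background ratios, Igeo compares a concentration to 1.5 times its background on a log2 scale, and PLI is the geometric mean of contamination factors. A sketch with hypothetical sediment values (mg/kg, not the study's data) illustrates the arithmetic.

```python
import math

def enrichment_factor(c_metal, c_fe, bg_metal, bg_fe):
    """EF = (M/Fe)_sample / (M/Fe)_background, Fe as reference element."""
    return (c_metal / c_fe) / (bg_metal / bg_fe)

def igeo(c_metal, bg_metal):
    """Geo-accumulation index; the factor 1.5 buffers natural background variation."""
    return math.log2(c_metal / (1.5 * bg_metal))

def pli(concs, backgrounds):
    """Pollution Load Index: geometric mean of the contamination factors."""
    cfs = [c / b for c, b in zip(concs, backgrounds)]
    return math.prod(cfs) ** (1 / len(cfs))

# Hypothetical sediment values against assumed crustal backgrounds (mg/kg)
ef_pb = enrichment_factor(60.0, 30000.0, 20.0, 47000.0)  # Pb vs Fe
igeo_cd = igeo(0.9, 0.3)                                  # Cd
pli_site = pli([60.0, 0.9, 120.0], [20.0, 0.3, 95.0])     # Pb, Cd, Zn
```

An EF well above 1 or a positive Igeo flags enrichment beyond background, which is how Cd and Pb stand out in the study.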

Keywords: Deepor Beel, enrichment factor, principal component analysis, trace metals

Procedia PDF Downloads 271
181 Exploitation Pattern of Atlantic Bonito in West African Waters: Case Study of the Bonito Stock in Senegalese Waters

Authors: Ousmane Sarr

Abstract:

The Senegalese coast has high productivity of fishery resources due to the frequent, intense upwelling that occurs along it, caused by the maritime trade winds, which make its waters nutrient-rich. Fishing plays a primordial role in Senegal's socioeconomic plans and food security. However, a global diagnosis of the Senegalese maritime fishing sector has highlighted the challenges this sector encounters. Among these concerns, some significant stocks, priority targets for artisanal fishing, need further assessment; if no efforts are made in this direction, most stocks will be overexploited or even in decline. It is in this context that this research was initiated. This investigation aimed to apply a multi-model approach (LBB; the catch-only-based CMSY model and its most recent version, CMSY++; JABBA; and JABBA-Select) to assess the stock of Atlantic bonito, Sarda sarda (Bloch, 1793), in the Senegalese Exclusive Economic Zone (SEEZ). Available catch, effort, and size data on Atlantic bonito over 15 years (2004-2018) were used to calculate the nominal and standardized CPUE, size-frequency distribution, and lengths at retention (50% and 95% selectivity) of the species. These results were employed as input parameters for the stock assessment models mentioned above to define the status of this species' stock in this region of the Atlantic Ocean. The LBB model indicated a healthy Atlantic bonito stock, with B/BMSY values ranging from 1.3 to 1.6 and B/B0 values varying from 0.47 to 0.61 across the main scenarios performed (BON_AFG_CL, BON_GN_Length, and BON_PS_Length). The results estimated by LBB are consistent with those obtained by CMSY. The CMSY results demonstrate that the SEEZ Atlantic bonito stock is in sound condition in the final year of the main scenarios analyzed (BON, BON-bt, BON-GN-bt, and BON-PS-bt), with sustainable relative stock biomass (B2018/BMSY = 1.13 to 1.3) and fishing pressure levels (F2018/FMSY = 0.52 to 1.43). 
The B/BMSY and F/FMSY results for the JABBA model ranged from 2.01 to 2.14 and from 0.47 to 0.33, respectively; the corresponding estimates for JABBA-Select ranged from 1.91 to 1.92 and from 0.52 to 0.54. The Kobe plot results of the base-case scenarios showed a 75% to 89% probability of lying in the green area, indicating sustainable fishing pressure and a healthy Atlantic bonito stock size capable of producing high yields close to the MSY. Based on the stock assessment results, this study formulated scientific advice for temporary management measures: an improvement of the selectivity parameters of longlines and purse seines, and a temporary prohibition of the use of sleeping nets in the Atlantic bonito fishery in the SEEZ, based on the results of the length-based models. Although these actions are temporary, they can be essential to reduce or avoid intense pressure on the Atlantic bonito stock in the SEEZ. However, it is necessary to establish harvest control rules to provide coherent and solid scientific information that leads to appropriate decision-making for the rational and sustainable exploitation of Atlantic bonito in the SEEZ and the Eastern Atlantic Ocean.
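The B/BMSY and F/FMSY reference points reported by CMSY and JABBA derive from surplus-production dynamics; a minimal Schaefer model (with made-up r, K, and catch values, not the bonito data) shows how the reference points and status ratios are computed.

```python
def schaefer_step(b, r, k, catch):
    """One annual update of the Schaefer surplus-production model:
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]."""
    return b + r * b * (1 - b / k) - catch

r, k = 0.8, 10000.0            # hypothetical intrinsic rate and carrying capacity
b_msy, f_msy = k / 2, r / 2    # Schaefer reference points
msy = r * k / 4                # maximum sustainable yield

b, catches = k, [1500.0] * 20  # start at unfished biomass, constant catch < MSY
for c in catches:
    b = schaefer_step(b, r, k, c)

status_b = b / b_msy                   # > 1: biomass above BMSY (green zone)
status_f = (catches[-1] / b) / f_msy   # < 1: fishing below FMSY
```

With catch held below MSY, biomass settles above BMSY and fishing mortality below FMSY, i.e. the sustainable quadrant of a Kobe plot.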

Keywords: multi-model approach, stock assessment, atlantic bonito, SEEZ

Procedia PDF Downloads 44
180 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction

Authors: Alisawi Alaa T., Collins P. E. F.

Abstract:

The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and in broader considerations of safety. The level of slope stability risk should be identified because of its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these methods on the basis of their hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of shear stress and shear strength. The slope is considered stable when the movement resistance forces are greater than those that drive the movement, with a factor of safety (the ratio of the resisting forces to the driving forces) greater than 1.00. 
However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards, assessment of the level of slope stability hazard, development of a sophisticated and practical hazard analysis method, linkage of the failure type of specific landslide conditions to the appropriate solution, and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through a geographical information system (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
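The limit-equilibrium criterion above (stable when the factor of safety, the ratio of resisting to driving forces, exceeds 1.00) can be sketched for the simplest case, a dry infinite slope with a Mohr-Coulomb soil. All parameter values below are illustrative only:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg):
    """Factor of safety for a dry infinite slope (Mohr-Coulomb).

    c: cohesion (kPa); phi_deg: friction angle (degrees);
    gamma: soil unit weight (kN/m^3); z: depth of slip plane (m);
    beta_deg: slope angle (degrees).
    """
    phi = math.radians(phi_deg)
    beta = math.radians(beta_deg)
    # Shear strength available on the slip plane (resisting)
    resisting = c + gamma * z * math.cos(beta) ** 2 * math.tan(phi)
    # Shear stress mobilized by gravity on the slip plane (driving)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

fs = infinite_slope_fs(c=10.0, phi_deg=30.0, gamma=18.0, z=3.0, beta_deg=25.0)
print(f"FS = {fs:.2f}", "stable" if fs > 1.0 else "unstable")
```

For a cohesionless soil (c = 0) this reduces to FS = tan(phi)/tan(beta), so the slope is unstable whenever the slope angle exceeds the friction angle.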

Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard

Procedia PDF Downloads 73
179 Comparison of Microstructure, Mechanical Properties and Residual Stresses in Laser and Electron Beam Welded Ti–5Al–2.5Sn Titanium Alloy

Authors: M. N. Baig, F. N. Khan, M. Junaid

Abstract:

Titanium alloys are widely employed in aerospace, medical, chemical, and marine applications. These alloys offer many advantages such as low specific weight, high strength-to-weight ratio, excellent corrosion resistance, high melting point, and good fatigue behavior. These attractive properties make titanium alloys unique, and they therefore require special attention in all areas of processing, especially welding. In this work, 1.6 mm thick sheets of Ti-5Al-2.5Sn, an alpha titanium (α-Ti) alloy, were welded using electron beam welding (EBW) and laser beam welding (LBW) processes to achieve a full-penetration bead-on-plate (BoP) configuration. The weldments were studied using a polarized optical microscope, SEM, EDS, and XRD. The microhardness distribution across the weld zone and the smooth and notch tensile strengths of the weldments were also recorded. Residual stresses, measured by the hole-drill strain measurement (HDSM) method, and deformation patterns of the weldments were recorded for the purpose of comparing the two welding processes. The fusion zone widths of the EBW and LBW weldments were found to be approximately equivalent owing to the fairly similar high power densities of the two processes. Relatively lower oxide content, and consequently higher joint quality, was achieved in the EBW weldment as compared to LBW due to the vacuum environment and the absence of any shielding gas. However, an increase in heat-affected zone width and a partial α′-martensitic transformation in the fusion zone of the EBW weldment were observed because of the lower cooling rates associated with EBW as compared with LBW. The microstructure in the fusion zone of the EBW weldment comprised both acicular α and α′ martensite within the prior β grains, whereas a complete α′-martensitic transformation was observed within the fusion zone of the LBW weldment. The hardness of the fusion zone in the EBW weldment was found to be lower than that of the LBW weldment due to the observed microstructural differences. 
Notch tensile specimens of the LBW weldment exhibited higher load capacity, ductility, and absorbed energy as compared with the EBW specimens due to the presence of the high-strength α′-martensitic phase. It was observed that the sheet deformation and deformation angle in the EBW weldment were greater than in the LBW weldment due to relatively greater heat retention in EBW, which led to larger thermal strains and hence higher deformations and deformation angle. The lowest residual stresses, tensile in nature, were found in the LBW weldments, owing to the high power density and higher cooling rates associated with the LBW process. The EBW weldment exhibited the highest compressive residual stresses, due to which the service life of the EBW weldment is expected to improve.

Keywords: laser and electron beam welding, microstructure and mechanical properties, residual stress and distortions, titanium alloys

Procedia PDF Downloads 197
178 The Environmental Impact of Sustainable Dispersion of Chlorine Releases in the Coastal Zone of Alexandria: Spatial-Ecological Modeling

Authors: Mohammed El Raey, Moustafa Osman Mohammed

Abstract:

Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability within a spatial-ecological model directs attention to urban environments in design review management so that they comply with the Earth system. The natural exchange patterns of ecosystems have consistent, periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of an industrial complex. The other analytical approach is failure mode and effects analysis (FMEA) for critical components. The plant safety parameters are identified for the engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed, and assessed in an industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and concentration of chlorine gas in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The predicted accident consequences are traced as risk contour concentration lines. The local greenhouse effect is predicted with relevant conclusions. The spatial-ecological model also predicts distribution schemes from the perspective of pollutants, considering multiple factors in a multi-criteria analysis. The data extend input-output analysis to evaluate the spillover effect, and Monte Carlo simulations and sensitivity analyses were conducted. These unique structures are balanced within "equilibrium patterns", such as the biosphere, and collectively form a composite index of many distributed feedback flows. These dynamic structures are related through their physical and chemical properties and enable a gradual and prolonged incremental pattern. 
While this spatial model structure is argued from ecology, resource savings, static load design, financial, and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structure for developing urban environments using optimization software, applied as an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach to systems ecology.
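The Gaussian plume calculation mentioned above can be sketched as follows; the power-law dispersion coefficients are simplified stand-ins for the Pasquill-Gifford curves, and all parameter values are illustrative, not taken from the study:

```python
import math

def plume_concentration(Q, u, x, H, a=0.08, b=0.06):
    """Ground-level centerline concentration (g/m^3) from a continuous
    point source, Gaussian plume model, evaluated at y = 0, z = 0.

    Q: emission rate (g/s); u: wind speed (m/s); x: downwind distance (m);
    H: effective release height (m). sigma_y and sigma_z use simple
    power-law fits (a*x, b*x) in place of Pasquill-Gifford tabulations.
    """
    sigma_y = a * x
    sigma_z = b * x
    return (Q / (math.pi * u * sigma_y * sigma_z)) * math.exp(
        -H ** 2 / (2.0 * sigma_z ** 2)
    )

# Illustrative chlorine release: 50 g/s at 10 m effective height, 3 m/s wind.
for x in (250.0, 500.0, 1000.0):
    print(f"x = {x:6.0f} m -> C = {plume_concentration(50.0, 3.0, x, 10.0):.3e} g/m^3")
```

Evaluating the same expression over a grid of (x, y) points and contouring the result is what produces the risk contour concentration lines described in the abstract.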

Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology

Procedia PDF Downloads 54
177 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing

Authors: Arjun Kumar Rath, Titus Dhanasingh

Abstract:

Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing in larger quantities. As the components get integrated, the devices are tested for their full functionality using advanced software tools. Benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, these tests are custom-built for every product and remain unusable for other variants. A majority of the tests go undocumented, are not updated, and become unusable once the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, Autotest, KernelCI, etc.) designed for testing embedded system devices, each with several unique and useful features, but no single tool or framework satisfies all of the testing needs of embedded systems; hence the need for an extensible framework that integrates a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods involve developing test libraries and support components for every new hardware platform that belongs to the same domain with identical hardware architecture. This approach has drawbacks such as non-reusability, where platform-specific libraries cannot be reused; the need to maintain source infrastructure for individual hardware platforms; and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in environment setup for testing, scalability, and maintenance. 
A desirable strategy is one focused on maximizing reusability, continuous integration, and leveraging artifacts across the complete development cycle, during all phases of testing and across a family of products. To overcome the stated challenges of the conventional method and deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, is designed, which can be deployed in embedded-system-related products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing different hardware, including microprocessor- and microcontroller-based boards. It offers benefits such as (1) time-to-market: it accelerates board bring-up time with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing, and device drivers); (2) reusability: framework components isolated from platform-specific hardware initialization and configuration make the adaptation of test cases across various platforms quick and simple; (3) an effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; (4) continuous integration: pre-integration with Jenkins enables continuous testing and an automated software update feature. Applying the embedded test framework accelerator throughout the design and development phase enables the development of well-tested systems before functional verification and improves time to market to a large extent.

Keywords: board diagnostics software, embedded system, hardware testing, test frameworks

Procedia PDF Downloads 120
176 Exploring the Cultural Values of Nursing Personnel Utilizing Hofstede's Cultural Dimensions

Authors: Ma Chu Jui

Abstract:

Culture plays a pivotal role in shaping societal responses to change and fostering adaptability. In the realm of healthcare provision, hospitals serve as dynamic settings molded by the cultural consciousness of healthcare professionals. This intricate interplay extends to their expectations of leadership, communication styles, and attitudes towards patient care. Recognizing the cultural inclinations of healthcare professionals becomes imperative in navigating this complex landscape. This study will utilize Hofstede's Value Survey Module 2013 (VSM 2013) as a comprehensive analytical tool. The targeted participants for this research are in-service nursing professionals with a tenure of at least three months, specifically employed in the nursing department of an Eastern hospital. This quantitative approach seeks to quantify diverse cultural tendencies among the targeted nursing professionals, elucidating not only abstract cultural concepts but also revealing their cultural inclinations across different dimensions. The study anticipates gathering between 400 to 500 responses, ensuring a robust dataset for a comprehensive analysis. The focused approach on nursing professionals within the Eastern hospital setting enhances the relevance and specificity of the cultural insights obtained. The research aims to contribute valuable knowledge to the understanding of cultural tendencies among in-service nursing personnel in the nursing department of this specific Eastern hospital. The VSM 2013 will be initially distributed to this specific group to collect responses, aiming to calculate scores on each of Hofstede's six cultural dimensions—Power Distance Index (PDI), Individualism vs. Collectivism (IDV), Uncertainty Avoidance Index (UAI), Masculinity vs. Femininity (MAS), Long-Term Orientation vs. Short-Term Normative Orientation (LTO), and Indulgence vs. Restraint (IVR). 
The study unveils a significant correlation between different cultural dimensions and healthcare professionals' tendencies in understanding leadership expectations through PDI, grasping behavioral patterns via IDV, acknowledging risk acceptance through UAI, and understanding their long-term and short-term behaviors through LTO. These tendencies extend to communication styles and attitudes towards patient care. These findings provide valuable insights into the nuanced interconnections between cultural factors and healthcare practices. Through a detailed analysis of the varying levels of these cultural dimensions, we gain a comprehensive understanding of the predominant inclinations among the majority of healthcare professionals. This nuanced perspective adds depth to our comprehension of how cultural values shape their approach to leadership, communication, and patient care, contributing to a more holistic understanding of the healthcare landscape. A profound comprehension of the cultural paradigms embraced by healthcare professionals holds transformative potential. Beyond a mere understanding, it acts as a catalyst for elevating the caliber of healthcare services. This heightened awareness fosters cohesive collaboration among healthcare teams, paving the way for the establishment of a unified healthcare ethos. By cultivating shared values, our study envisions a healthcare environment characterized by enhanced quality, improved teamwork, and ultimately, a more favorable and patient-centric healthcare landscape. In essence, our research underscores the critical role of cultural awareness in shaping the future of healthcare delivery.
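Each VSM index is computed as a weighted difference of mean item scores plus a constant. The sketch below shows the general shape of that computation with placeholder weights and item labels; the official VSM 2013 coefficients are published in Hofstede's manual and are not reproduced here:

```python
def vsm_index(means, w1, items1, w2, items2, constant=0.0):
    """Generic VSM-style index: a weighted difference of two pairs of
    item means plus a constant.

    means: dict mapping questionnaire item label -> mean score (1-5 scale).
    w1, w2: weights; items1, items2: (item_a, item_b) pairs whose mean
    difference is weighted. Weights and item labels here are placeholders,
    not the official VSM 2013 coefficients.
    """
    d1 = means[items1[0]] - means[items1[1]]
    d2 = means[items2[0]] - means[items2[1]]
    return w1 * d1 + w2 * d2 + constant

# Hypothetical mean item scores from a nursing-staff sample.
means = {"q07": 2.1, "q02": 2.5, "q20": 3.0, "q23": 2.8}
pdi_like = vsm_index(means, 35, ("q07", "q02"), 25, ("q20", "q23"), constant=20)
print(f"PDI-like score: {pdi_like:.1f}")
```

With 400-500 responses, the item means would simply be column averages of the survey data before being fed into formulas of this shape, one per dimension.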

Keywords: hofstede's cultural, cultural dimensions, cultural values in healthcare, cultural awareness in nursing

Procedia PDF Downloads 37
175 Bee Keeping for Human-Elephant Conflict Mitigation: A Success Story for Sustainable Tourism in Kibale National Park, Western Uganda

Authors: Dorothy Kagazi

Abstract:

The African elephant (Loxodonta africana) remains one of the most crop-damaging species around Kibale National Park, western Uganda. Elephant crop raiding deprives communities of food and incomes, consequently impacting livelihoods, attitudes, and support for conservation. It also attracts an aggressive reaction from local communities, including the retaliatory killing of a species that is already endangered and listed under Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). In order to mitigate elephant crop raiding and minimize conflict, a number of interventions were devised by the government of Uganda, such as physical guarding, scare-shooting, excavation of trenches, growing of unpalatable crops, and fire lighting, all of which have over the years been implemented around the park. These generated varying degrees of effectiveness but largely never solved the problem of elephants crossing into communities to destroy food and shelter, which had a negative effect on the sustainable tourism of the communities, who often resorted to killing these animals, hence contributing to the falling numbers of the species. It was not until the government discovered that there are far more effective ways of deterring these animals from crossing into communities that it commissioned a study to deploy the African honeybee (Apis mellifera scutellata) as a deterrent against elephant crop raiding and for income enhancement for local people around the park. These efforts led to a number of projects around Kibale National Park where communities were facilitated to keep bees for human-elephant conflict mitigation and rural income enhancement through the sale of honey. These projects have registered tremendous success in reducing crop damage, enhancing rural incomes, influencing positive attitude change, and ultimately securing community support for elephant and park conservation, which is a clear manifestation of sustainable tourism development in the area. 
To address the issue of sustainability, the project was aligned with four major objectives that contributed to the overall goal of maintaining the areas around the park, and the national park itself, in such a manner that they remain viable over an infinite period. Among these were determining the deterrence effect of bees against elephant crop raiding, assessing the contribution of beekeeping towards rural income enhancement, and determining the impact of community involvement on park conservation and management, among others. The project deployed 500 improved hives by placing them at specific, previously identified and mapped-out elephant crossing points along the park boundary. A control site was established without any intervention to facilitate comparison of findings, and data were collected on elephant raiding frequency and patterns, honey harvested, and community attitude towards the park. A socio-economic assessment was also undertaken to ascertain the contribution of beekeeping to incomes and attitude change. In conclusion, human-wildlife conflicts have disturbed conservation and sustainable tourism development efforts. Success stories like the beekeeping strategy should therefore be extensively discussed and widely shared as a conservation technique for sustainable tourism.

Keywords: bees, communities, conservation, elephants

Procedia PDF Downloads 184
174 Biosensor: An Approach towards Sustainable Environment

Authors: Purnima Dhall, Rita Kumar

Abstract:

Introduction: River Yamuna flows through the national capital territory (NCT) and is the primary source of drinking water for the city. Delhi discharges about 3,684 MLD of sewage through its 18 drains into the Yamuna. Water quality monitoring is an important aspect of water management with respect to pollution control. Public concern and legislation are nowadays demanding better environmental control. Conventional methods for estimating BOD5 have various drawbacks: they are expensive, time-consuming, and require the use of highly trained personnel. Stringent forthcoming regulations on wastewater have necessitated the development of analytical systems that contribute to greater process efficiency. Biosensors offer the possibility of real-time analysis. Methodology: In the present study, a novel rapid method for the determination of biochemical oxygen demand (BOD) has been developed. Using the developed method, the BOD of a sample can be determined within 2 hours, as compared to 3-5 days with the standard BOD assay. Moreover, the test is based on a specified consortium instead of undefined seeding material, therefore minimizing variability among the results. The device is coupled to software which automatically calculates the dilution required, so prior dilution of the sample is not needed before BOD estimation. The developed BOD biosensor makes use of immobilized microorganisms to sense the biochemical oxygen demand of industrial wastewaters of low, moderate, or high biodegradability. The method is quick, robust, online, and less time-consuming. Findings: The results of extensive testing of the developed biosensor on drains demonstrate that the BOD values obtained by the device correlated well with conventional BOD values; the observed R2 value was 0.995. The reproducibility of the measurements with the BOD biosensor was within a percentage deviation of ±10%. 
Advantages of the developed BOD biosensor: • determines water pollution quickly, within 2 hours; • determines the pollution of all types of wastewater; • has a prolonged shelf life of more than 400 days; • enhanced repeatability and reproducibility; • eliminates the need for COD estimation. Distinctiveness of the technology: • bio-component: can determine the BOD load of all types of wastewater; • immobilization: increased shelf life (> 400 days), extended stability and viability; • software: reduces manual errors and estimation time. Conclusion: The BOD biosensor can be used to measure the BOD value of real wastewater samples. The BOD biosensor showed good reproducibility in the results. This technology is useful in deciding treatment strategies well ahead and so facilitates the discharge of properly treated water to common water bodies. The developed technology has been transferred to M/s Forbes Marshall Pvt Ltd, Pune.
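The reported R² of 0.995 between biosensor and conventional BOD readings is a coefficient of determination from a paired comparison, and the ±10% figure is a per-sample deviation check. A minimal sketch of both computations, with made-up readings rather than the study's data:

```python
def r_squared(x, y):
    """Coefficient of determination for a least-squares line y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy * sxy / (sxx * syy)

# Illustrative (made-up) paired readings in mg/L: conventional 3-5 day BOD
# vs. the 2-hour biosensor value for the same drain samples.
conventional = [40, 80, 150, 220, 300]
biosensor = [42, 78, 155, 215, 310]

r2 = r_squared(conventional, biosensor)
# Per-sample deviation check against the conventional reference value.
within_10pct = all(abs(b - c) / c <= 0.10 for b, c in zip(biosensor, conventional))
print(f"R^2 = {r2:.3f}, all within +/-10%: {within_10pct}")
```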

Keywords: biosensor, biochemical oxygen demand, immobilized, monitoring, Yamuna

Procedia PDF Downloads 254
173 The Regulation of the Cancer Epigenetic Landscape Lies in the Realm of the Long Non-coding RNAs

Authors: Ricardo Alberto Chiong Zevallos, Eduardo Moraes Rego Reis

Abstract:

Pancreatic adenocarcinoma (PDAC) patients have a less than 10% 5-year survival rate. PDAC has no defined diagnostic and prognostic biomarkers. Gemcitabine is the first-line drug in PDAC and several other cancers. Long non-coding RNAs (lncRNAs) contribute to tumorigenesis and are potential biomarkers for PDAC. Although lncRNAs are not translated into proteins, they have important functions. LncRNAs can decoy or recruit proteins of the epigenetic machinery, act as microRNA sponges, participate in protein translocation through different cellular compartments, and even promote chemoresistance. The chromatin remodeling enzyme EZH2 is a histone methyltransferase that catalyzes the methylation of histone 3 at lysine 27, silencing local expression. EZH2 is ambivalent: it can also activate gene expression independently of its histone methyltransferase activity. EZH2 is overexpressed in several cancers and interacts with lncRNAs, being recruited to specific loci. EZH2 can be recruited to activate an oncogene or to silence a tumor suppressor. The misregulation of lncRNAs in cancer can result in the differential recruitment of EZH2 and in a distinct epigenetic landscape, promoting chemoresistance. The relevance of the EZH2-lncRNA interaction to chemoresistant PDAC was assessed by real-time quantitative PCR (RT-qPCR) and RNA immunoprecipitation (RIP) experiments with naïve and gemcitabine-resistant PDAC cells. The expression of several lncRNAs and EZH2 gene targets was evaluated contrasting naïve and resistant cells. Candidate genes were selected by bioinformatic analysis and literature curation. Indeed, the resistant cell line showed higher expression of chemoresistance-associated lncRNAs and protein-coding genes. RIP detected lncRNAs interacting with EZH2 at varying intensity levels in the cell lines. During RIP, the nuclear fraction of the cells was incubated with an antibody against EZH2 and with magnetic beads. 
The RNA precipitated with the bead-antibody-EZH2 complex was isolated and reverse transcribed. The presence of candidate lncRNAs was detected by RT-qPCR, and the enrichment was calculated relative to the INPUT (a total lysate control sample collected before RIP). The enrichment levels varied across the several lncRNAs and cell lines. The EZH2-lncRNA interaction might be responsible for the regulation of chemoresistance-associated genes in multiple cancers. The relevance of the lncRNA-EZH2 interaction to PDAC was assessed by siRNA knockdown of a lncRNA, followed by analysis of EZH2 target expression by RT-qPCR. Chromatin immunoprecipitation (ChIP) of EZH2 and H3K27me3, followed by RT-qPCR with primers for EZH2 targets, also assessed the specificity of EZH2 recruitment by the lncRNA. This is the first report of the interaction of EZH2 and the lncRNAs HOTTIP and PVT1 in chemoresistant PDAC. HOTTIP and PVT1 have been described as promoting chemoresistance in several cancers, but the role of EZH2 is not clarified. For the first time, the lncRNA LINC01133 was detected in a chemoresistant cancer. The interactions of EZH2 with LINC02577, LINC00920, LINC00941, and LINC01559 have never been reported in any context. The novel lncRNA-EZH2 interactions regulate chemoresistance-associated genes in PDAC and might be relevant to other cancers. Therapies targeting EZH2 alone were not successful, and a combinatorial approach also targeting the lncRNAs interacting with it might be key to overcoming chemoresistance in several cancers.
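Enrichment relative to INPUT in RIP-qPCR is conventionally computed by adjusting the INPUT Ct for the fraction of lysate saved and converting the Ct difference to a fold value. A sketch, assuming ~100% primer efficiency and a 1% INPUT aliquot (both assumptions, not details from the abstract):

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """RIP-qPCR enrichment expressed as percent of INPUT.

    ct_ip: Ct of the immunoprecipitated (IP) sample.
    ct_input: Ct of the INPUT aliquot.
    input_fraction: fraction of lysate saved as INPUT (1% assumed here).
    Assumes ~100% primer efficiency, i.e., signal doubles each cycle.
    """
    # Scale the INPUT Ct up to represent 100% of the starting material.
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Hypothetical Ct values for one candidate lncRNA in one cell line.
print(f"{percent_input(ct_ip=28.0, ct_input=26.0):.2f}% of INPUT")
```

Comparing this value for a candidate lncRNA against a matched IgG or no-antibody control RIP is what distinguishes genuine EZH2 binding from background.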

Keywords: epigenetics, chemoresistance, long non-coding RNAs, pancreatic cancer, histone modification

Procedia PDF Downloads 66
172 The in Vitro and in Vivo Antifungal Activity of Terminalia Mantaly on Aspergillus Species Using Drosophila melanogaster (UAS-Diptericin) As a Model

Authors: Ponchang Apollos Wuyep, Alice Njolke Mafe, Longchi Satkat Zacheaus, Dogun Ojochogu, Dabot Ayuba Yakubu

Abstract:

Fungi cause huge losses when infections occur in both plants and animals. Synthetic antifungal drugs are mostly very expensive and highly cytotoxic. This study was aimed at determining the in vitro and in vivo antifungal activities of the leaf and stem extracts of Terminalia mantaly H. Perrier (umbrella tree) on Aspergillus species, in a bid to identify potential sources of cheap starting materials for the synthesis of new drugs to address growing antimicrobial resistance. Powdered T. mantaly leaf and stem material was extracted by fractionation using the solvent partition coefficient method, with solvents applied in graded order: n-hexane, ethyl acetate, methanol, and distilled water. Phytochemical screening of each fraction revealed the presence of alkaloids, saponins, tannins, flavonoids, carbohydrates, steroids, anthraquinones, cardiac glycosides, and terpenoids in varying degrees. The agar well diffusion technique was used to screen the fractions for antifungal activity against clinical isolates of Aspergillus species (Aspergillus flavus and Aspergillus fumigatus). The minimum inhibitory concentration (MIC50) of the most active extracts was determined by the broth dilution method. The tested fractions showed high antifungal activity, with zones of inhibition ranging from 6 to 26 mm and 8 to 30 mm (leaf fractions) and from 10 to 34 mm and 14 to 36 mm (stem fractions) against A. flavus and A. fumigatus, respectively. All the fractions showed antifungal activity in a dose-response relationship at concentrations of 62.5 mg/ml, 125 mg/ml, 250 mg/ml, and 500 mg/ml. The best antifungal efficacy in vitro was shown by the ethyl acetate, hexane, and methanol fractions, the most potent fractions, with MICs ranging from 62.5 to 125 mg/ml. 
There was no statistically significant difference (P > 0.05) in potency among the eight leaf and stem fractions (n-hexane, ethyl acetate, methanol, and distilled water), the antifungal fluconazole, which served as positive control, and 10% DMSO (dimethyl sulfoxide), which served as negative control. In the in vivo investigations, the ingestion technique was used for the infection studies. Female Drosophila melanogaster (UAS-Diptericin) were divided into normal flies (positive control), infected and untreated flies (negative control), and flies infected with A. fumigatus and placed on a normal diet, on diets containing fractions (MSM and HSM, each at concentrations of 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100 mg/ml), or on a diet containing the control drug (fluconazole). The flies were observed for fifteen (15) days, and the total mortality of flies was recorded each day. The results of the study reveal that the flies were susceptible to infection with A. fumigatus and responded to treatment, with the greatest effectiveness at 50 mg/ml, 60 mg/ml, and 70 mg/ml for both the methanol and hexane stem fractions. Therefore, the methanol and hexane stem fractions of T. mantaly contain therapeutically useful compounds, justifying the traditional use of this plant for the treatment of fungal infections.
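The broth dilution readout described above reduces to finding the lowest concentration in the two-fold series at which no visible growth occurs. A minimal sketch with hypothetical well readings (the growth pattern below is invented for illustration):

```python
def mic(concentrations, growth):
    """Lowest tested concentration with no observed growth.

    concentrations: tested values sorted in increasing order.
    growth: parallel list of booleans (True = visible growth in that well).
    Returns None if the organism grew at every tested concentration.
    """
    for conc, grew in zip(concentrations, growth):
        if not grew:
            return conc
    return None

# Two-fold series matching the abstract's tested range (mg/ml).
series = [62.5, 125, 250, 500]
# Hypothetical wells: growth at 62.5 mg/ml, inhibited from 125 mg/ml upward.
print(mic(series, [True, False, False, False]))
```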

Keywords: Terminalia mantaly, Aspergillus fumigatus, cytotoxic, Drosophila melanogaster, antifungal

Procedia PDF Downloads 56
171 Assessing Image Quality in Mobile Radiography: A Phantom-Based Evaluation of a New Lightweight Mobile X-Ray Equipment

Authors: May Bazzi, Shafik Tokmaj, Younes Saberi, Mats Geijer, Tony Jurkiewicz, Patrik Sund, Anna Bjällmark

Abstract:

Mobile radiography, employing portable X-ray equipment, has become a routine procedure within hospital settings, with chest X-rays in intensive care units standing out as the most prevalent mobile X-ray examinations. This approach is not limited to hospitals alone, as it extends its benefits to imaging patients in various settings, particularly those too frail to be transported, such as elderly care residents in nursing homes. Moreover, the utility of mobile X-ray isn't confined solely to traditional healthcare recipients; it has proven to be a valuable resource for vulnerable populations, including the homeless, drug users, asylum seekers, and patients with multiple co-morbidities. Mobile X-rays reduce patient stress, minimize costly hospitalizations, and offer cost-effective imaging. While studies confirm its reliability, further research is needed, especially regarding image quality. Recent advancements in lightweight equipment with enhanced battery and detector technology provide the potential for nearly handheld radiography. The main aim of this study was to evaluate a new lightweight mobile X-ray system with two different detectors and compare the image quality with a modern stationary system. Methods: A total of 74 images of the chest (chest anterior-posterior (AP) views and chest lateral views) and pelvic/hip region (AP pelvis views, hip AP views, and hip cross-table lateral views) were acquired on a whole-body phantom (Kyotokagaku, Japan), utilizing varying image parameters. These images were obtained using a stationary system - 18 images (Mediel, Sweden), a mobile X-ray system with a second-generation detector - 28 images (FDR D-EVO II; Fujifilm, Japan) and a mobile X-ray system with a third-generation detector - 28 images (FDR D-EVO III; Fujifilm, Japan). Image quality was assessed by visual grading analysis (VGA), which is a method to measure image quality by assessing the visibility and accurate reproduction of anatomical structures within the images. 
A total of 33 image criteria were used in the analysis. A panel of two experienced radiologists, two experienced radiographers, and two final-term radiographer students evaluated the image quality on a 5-grade ordinal scale using the software Viewdex 3.0 (Viewer for Digital Evaluation of X-ray images, Sweden). Data were analyzed using visual grading characteristics analysis. The dose was measured by the dose-area product (DAP) reported by the respective systems. Results: The mobile X-ray equipment (both detectors) showed significantly better image quality than the stationary equipment for the pelvis, hip AP, and hip cross-table lateral images, with AUC(VGA) values ranging from 0.64 to 0.92, while chest images showed mixed results. The number of images rated as having sufficient quality for diagnostic use was significantly higher for mobile X-ray generations 2 and 3 compared with the stationary X-ray system. The DAP values were higher for the stationary compared to the mobile system. Conclusions: The new lightweight radiographic equipment had an image quality at least as good as that of a fixed system at a lower radiation dose. Future studies should focus on clinical images and consider radiographers' viewpoints for a comprehensive assessment.
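Visual grading characteristics analysis compares two systems' ordinal ratings nonparametrically, much like an ROC area: an AUC of 0.5 means the systems are rated equally, and values above 0.5 favor the test system. A sketch of that Mann-Whitney-style AUC on made-up rating data (not the study's):

```python
def auc_vgc(ratings_ref, ratings_test):
    """Nonparametric AUC comparing ordinal VGA ratings of a test system
    against a reference system: 0.5 = rated equally, > 0.5 = test system
    rated higher. Ties count half, as in the Mann-Whitney statistic."""
    wins = ties = 0
    for t in ratings_test:
        for r in ratings_ref:
            if t > r:
                wins += 1
            elif t == r:
                ties += 1
    return (wins + 0.5 * ties) / (len(ratings_ref) * len(ratings_test))

# Hypothetical 5-grade scores for the same criterion on five images.
stationary = [3, 3, 4, 2, 3]
mobile = [4, 4, 5, 3, 4]
print(f"AUC = {auc_vgc(stationary, mobile):.2f}")
```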

Keywords: mobile x-ray, visual grading analysis, radiographer, radiation dose

Procedia PDF Downloads 37
170 X-Ray Detector Technology Optimization In CT Imaging

Authors: Aziz Ikhlef

Abstract:

Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required by front-illuminated diodes. In back-lit diodes, the electronic noise is improved because the routing reduction lowers the load capacitance. This translates into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP) that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper presents an overview of detector technologies and image-chain improvements investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners, both in regular examinations and in energy-discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We go through the properties of the post-patient collimation used to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed, and crosstalk, that determine spectral imaging performance; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise, and temporal/spatial resolution.
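The fast-kVp switching requirement described above can be illustrated with a simple single-exponential model of scintillator decay: the fraction of the previous view's signal still present at the next sample. The decay times and sampling interval below are illustrative assumptions, not figures for any specific detector.

```python
import math

def residual_fraction(decay_time_us, sample_interval_us):
    """Fraction of the previous view's signal remaining at the next sample,
    assuming a single-exponential scintillator decay."""
    return math.exp(-sample_interval_us / decay_time_us)

# Illustrative comparison: a slow scintillator (~100 us decay) vs a fast
# one (~1 us decay), sampled every 250 us (fast-kVp switching regime).
for tau in (100.0, 1.0):
    r = residual_fraction(tau, 250.0)
    print(f"decay {tau:6.1f} us -> residual fraction {r:.2e}")
```

Under this toy model, the slow scintillator leaves several percent of the previous sample's signal behind, which would contaminate the alternating 80/140 kVp projections, while the fast one leaves a negligible residue.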

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 236
169 Urban Dynamics Modelling of Mixed Land Use for Sustainable Urban Development in Indian Context

Authors: Rewati Raman, Uttam K. Roy

Abstract:

One of the main adversaries of city planning in present times is the ever-expanding problem of urbanization and the antagonistic issues accompanying it. Prevalent challenges of urbanization, such as population growth, urban sprawl, poverty, inequality, pollution, and congestion, call for reforms in the urban fabric as well as in planning theory and practice. Among the various paradigms of city planning, land use planning has been the major instrument for the spatial planning of cities and regions in India. Zoning-regulation-based land use planning, in the form of land use and development control plans (LUDCP) and development control regulations (DCR), has been considered a mainstream guiding principle in land use planning for decades. In spite of the many advantages of such zoning-based regulations, over time scholars have critiqued their limitations: isolation and lack of vitality, inconvenience for business in terms of proximity to residences and low operating cost, an unsuitable environment for small investments, longer travel distances to facilities and amenities and thereby higher expenditure, safety issues, etc. Researchers have advocated mixed land use as a tool to avoid such limitations in city planning. In addition, mixed land use can offer many advantages, such as housing variety and density, the creation of an economic blend of compatible land uses, compact development, stronger neighborhood character, walkability, and the generation of jobs. Conversely, mixed land use beyond a suitable balance of uses can also bring disadvantages: traffic congestion, encroachments, very high-density housing leading to slum-like conditions, parking spill-over, non-residential uses operating on residential premises while paying less tax, chaos hampering residential privacy, pressure on existing infrastructure facilities, etc.
This research aims at studying and outlining, through modeling tools, the various challenges and potentials of mixed land use zoning as a competent instrument for city planning in the present urban scenario. The methodology adopted in this paper involves the study of a mixed land use neighborhood in India, the identification of indicators and parameters related to the extent and spatial pattern of mixing, and the subsequent use of system dynamics as a modeling tool for simulation. The findings from this analysis helped in identifying the various advantages and challenges associated with the dynamic nature of a mixed-use urban settlement. The results also confirmed the hypothesis that mixed-use neighborhoods are catalysts for employment generation and socioeconomic gains while improving vibrancy, health, safety, and security. It is also seen that certain challenges related to chaos, lack of privacy, and pollution prevail in mixed-use neighborhoods; these can be mitigated by varying the percentage of mixing as per need, ensuring compatibility of adjoining uses, institutional interventions in the form of policies, neighborhood micro-climatic interventions, etc. This paper therefore gives a consolidated and holistic framework, and a quantified outcome, pertaining to the extent and spatial pattern of mixed land use that should be adopted to ensure sustainable urban planning.
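The stock-and-flow idea behind the system dynamics approach named above can be sketched as a small Euler-integrated model in which a congestion stock is fed by mixed-use activity and drained by infrastructure capacity. The structure, variable names, and coefficients below are invented for illustration and are not the authors' model.

```python
def simulate_congestion(mix_fraction, steps=50, dt=1.0,
                        inflow_rate=0.10, relief_rate=0.05):
    """Toy stock-flow model: a congestion stock driven by mixed-use
    activity (inflow) and relieved in proportion to current congestion
    (outflow, standing in for infrastructure capacity)."""
    congestion = 0.0
    for _ in range(steps):
        inflow = inflow_rate * mix_fraction    # activity generates trips
        outflow = relief_rate * congestion     # capacity absorbs load
        congestion += dt * (inflow - outflow)
    return congestion

# The stock approaches a steady state of inflow_rate * mix / relief_rate:
for mix in (0.2, 0.5, 0.8):
    print(f"mix {mix:.1f} -> congestion {simulate_congestion(mix):.2f}")
```

Even this toy version shows the qualitative behavior the paper exploits: varying the percentage of mixing shifts the equilibrium congestion, which is the kind of lever the simulation uses to balance vibrancy against chaos.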

Keywords: mixed land use, sustainable development, system dynamics analysis, urban dynamics modelling

Procedia PDF Downloads 153
168 Rapid, Direct, Real-Time Method for Bacteria Detection on Surfaces

Authors: Evgenia Iakovleva, Juha Koivisto, Pasi Karppinen, J. Inkinen, Mikko Alava

Abstract:

Preventing the spread of infectious diseases throughout the world is one of the most important tasks of modern health care. Infectious diseases not only account for one fifth of deaths worldwide but also cause many pathological complications for human health. Touch surfaces pose an important vector for the spread of infections by various microorganisms, including antimicrobial-resistant organisms; antimicrobial resistance, in turn, is the response of bacteria to the widespread overuse or inappropriate use of antibiotics. The biggest challenges in bacterial detection by existing methods are indirect determination, long analysis times, sample preparation, the use of chemicals and expensive equipment, and the need for qualified specialists. A high-performance, rapid, real-time method is therefore demanded for practical bacterial detection and for controlling epidemiological hazards. Among the known methods for determining bacteria on surfaces, hyperspectral methods can serve as direct and rapid techniques for microorganism detection on different kinds of surfaces, based on fluorescence, without sampling, sample preparation, or chemicals. The aim of this study was to assess the relevance of such systems to the remote sensing of surfaces for microorganism detection, to help prevent the global spread of infectious diseases. Bacillus subtilis and Escherichia coli at different concentrations (from 0 to 10^8 cells/100 µL) were detected with a hyperspectral camera using different filters, giving a visible visualization of bacteria and background spots on a steel plate. A method of internal standards was applied to monitor the correctness of the analysis results. The distances from the sample to the hyperspectral camera and to the light source were 25 cm and 40 cm, respectively. Each sample was optically imaged from the surface by a hyperspectral imaging system utilizing a JAI CM-140GE-UV camera. The light source was a BeamZ FLATPAR DMX tri-light with 3 W tri-colour LEDs (red, blue, and green).
Light colors were changed through a DMX USB Pro interface. The developed system was calibrated following a standard procedure of setting the exposure and was focused for light with λ = 525 nm. The filter was a Thorlabs Kurios™ tunable filter controller with wavelengths from 420 to 720 nm. All data collection, pre-processing, and multivariate analysis were performed using LabVIEW and Python software. Bacterial stains, both visible and invisible to the human eye, clustered apart from the reference steel material in clustering analyses using different light sources and filter wavelengths. The calculation of the random and systematic errors of the analysis results proved the applicability of the method in real conditions. Validation experiments were carried out with photometry and an ATP swab test; the lower detection limit of the developed method is several orders of magnitude lower than that of either validation method. All parameters of the experiments were the same, except for the light. The hyperspectral imaging method makes it possible to separate not only bacteria and surfaces but also different types of bacteria, such as Gram-negative Escherichia coli and Gram-positive Bacillus subtilis. Unlike other microbiological methods, the developed method allows the sample preparation and the use of chemicals to be skipped. The analysis time with the novel hyperspectral system is a few seconds, which is innovative in the field of microbiological testing.
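The clustering step described above, separating fluorescent bacterial stains from the steel background by their per-pixel spectra, can be sketched with a minimal k-means. A real pipeline would run on calibrated hyperspectral cubes; the 4-band spectra below are invented, with the "stain" pixels bright in the band standing in for the 525 nm fluorescence.

```python
def kmeans(spectra, k=2, iters=20):
    """Minimal k-means clustering of pixel spectra (lists of band values)."""
    centers = [list(s) for s in spectra[:k]]      # naive init: first k points
    labels = [0] * len(spectra)
    for _ in range(iters):
        for i, s in enumerate(spectra):           # assignment step
            labels[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(s, centers[c])))
        for c in range(k):                        # update step
            members = [s for s, lab in zip(spectra, labels) if lab == c]
            if members:
                centers[c] = [sum(vals) / len(members)
                              for vals in zip(*members)]
    return labels

# Synthetic 4-band spectra (values invented): background pixels are dim,
# stained pixels fluoresce strongly in band 2.
pixels = [[10, 12, 11, 9], [11, 10, 12, 10],      # background
          [12, 15, 60, 14], [11, 14, 65, 13]]     # fluorescent stain
print(kmeans(pixels))
```

The two background pixels end up in one cluster and the two stain pixels in the other, which is the separation the study reports between reference steel and bacterial stains.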

Keywords: Escherichia coli, Bacillus subtilis, hyperspectral imaging, microorganisms detection

Procedia PDF Downloads 188
167 Comparative Vector Susceptibility for Dengue Virus and Their Co-Infection in A. aegypti and A. albopictus

Authors: Monika Soni, Chandra Bhattacharya, Siraj Ahmed Ahmed, Prafulla Dutta

Abstract:

Dengue is now a globally important arboviral disease. Extensive vector surveillance has long established A. aegypti as the primary vector, but A. albopictus is now accelerating the situation through gradual adaptation to human surroundings. Global destabilization and gradual climatic shifts, with rising temperatures, have significantly expanded the geographic range of these species. These versatile vectors also host Chikungunya, Zika, and yellow fever viruses. The biggest challenge faced by endemic countries now is the upsurge in co-infections reported with multiple serotypes and virus co-circulation. To foster vector control interventions and mitigate the disease burden, knowledge is needed on vector susceptibility and viral tolerance in response to multiple infections. To address our understanding of transmission dynamics and reproductive fitness, both vectors were exposed to single and dual combinations of all four dengue serotypes by artificial feeding and followed up to the third generation. Artificial feeding revealed a significant difference in feeding rate between the two species: A. albopictus was a poor artificial feeder (35-50%) compared to A. aegypti (95-97%). Robust sequential screening of viral antigen in the mosquitoes was performed by dengue NS1 ELISA, RT-PCR, and quantitative PCR. To observe viral dissemination in different mosquito tissues, an indirect immunofluorescence assay was performed. Results showed that both vectors were initially infected with all dengue serotypes (1-4) and with the co-infection combinations tested (D1 and D2, D1 and D3, D1 and D4, D2 and D4). For DENV-2, there was a significant difference in the peak titer observed at day 16 post-infection. When exposed to dual infections, A. aegypti supported all combinations of virus, whereas A. albopictus sustained only single infections in successive days. There was a significant negative effect on the fecundity and fertility of both vectors compared to controls (P_ANOVA < 0.001).
For dengue-2-infected mosquitoes, fecundity in the parent generation was significantly higher (P_Bonferroni < 0.001) for A. albopictus compared to A. aegypti, but there was a complete loss of fecundity from the second to the third generation for A. albopictus. It was observed that A. aegypti becomes infected with multiple serotypes frequently, even at low viral titres, compared to A. albopictus. Possible reasons for this could be the presence of Wolbachia infection in A. albopictus, the mosquito innate immune response, small RNA interference, etc. Based on these observations, it can be anticipated that transovarial transmission may not be an important phenomenon for clinical disease outcome, given the absence of viral positivity by the third generation. Also, dengue NS1 ELISA can be used for preliminary viral detection in mosquitoes, as more than 90% of the samples were found positive compared to RT-PCR and viral load estimation.
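The Bonferroni-corrected p-values reported above come from a standard multiplicity adjustment: each raw p-value is multiplied by the number of comparisons in the family. A minimal sketch, with invented raw p-values standing in for pairwise fecundity comparisons:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: multiply each raw p-value by the number of
    comparisons (capped at 1.0) and flag those still below alpha."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    return [(p_adj, p_adj < alpha) for p_adj in adjusted]

# Hypothetical raw p-values from three pairwise fecundity comparisons:
raw = [0.0004, 0.012, 0.2]
for p_adj, significant in bonferroni(raw):
    print(f"adjusted p = {p_adj:.4f}, significant: {significant}")
```

The correction is conservative by design: a comparison must survive multiplication by the family size before being declared significant, which is why studies report P_Bonferroni rather than raw p-values for pairwise follow-ups.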

Keywords: co-infection, dengue, reproductive fitness, viral quantification

Procedia PDF Downloads 178
166 Developing and Testing a Questionnaire of Music Memorization and Practice

Authors: Diana Santiago, Tania Lisboa, Sophie Lee, Alexander P. Demos, Monica C. S. Vasconcelos

Abstract:

Memorization has long been recognized as an arduous and anxiety-evoking task for musicians, and yet, it is an essential aspect of performance. Research shows that musicians are often not taught how to memorize. While memorization and practice strategies of professionals have been studied, little research has been done to examine how student musicians learn to practice and memorize music in different cultural settings. We present the process of developing and testing a questionnaire of music memorization and musical practice for student musicians in the UK and Brazil. A survey was developed for a cross-cultural research project aiming at examining how young orchestral musicians (aged 7–18 years) in different learning environments and cultures engage in instrumental practice and memorization. The questionnaire development included members of a UK/US/Brazil research team of music educators and performance science researchers. A pool of items was developed for each aspect of practice and memorization identified, based on literature, personal experiences, and adapted from existing questionnaires. Item development took the varying levels of cognitive and social development of the target populations into consideration. It also considered the diverse target learning environments. Items were initially grouped in accordance with a single underlying construct/behavior. The questionnaire comprised three sections: a demographics section, a section on practice (containing 29 items), and a section on memorization (containing 40 items). Next, the response process was considered and a 5-point Likert scale ranging from ‘always’ to ‘never’ with a verbal label and an image assigned to each response option was selected, following effective questionnaire design for children and youths. Finally, a pilot study was conducted with young orchestral musicians from diverse learning environments in Brazil and the United Kingdom. 
Data collection took place in either one-to-one or group settings to accommodate the participants. Cognitive interviews were utilized to establish response-process validity by confirming the readability and accurate comprehension of the questionnaire items, or by highlighting the need for item revision. Internal reliability was investigated by measuring the consistency of the item groups using Cronbach's alpha. The pilot study successfully relied on the questionnaire to generate data about the engagement in instrumental practice and memorization of young musicians of different levels and instruments, across different learning and cultural environments. Interaction analysis of the cognitive interviews undertaken with these participants, however, exposed the fact that certain items, and the response scale, could be interpreted in multiple ways. The questionnaire text was, therefore, revised accordingly. The low Cronbach's alpha scores of many item groups indicated another issue with the original questionnaire: its low level of internal reliability. Several reasons for this poor reliability can be suggested, including the issues with item interpretation revealed through interaction analysis of the cognitive interviews, the small number of participants (34), and the elusive nature of the construct in question. The revised questionnaire measures 78 specific behaviors or opinions. It provides an efficient means of gathering information about the engagement of young musicians in practice and memorization on a large scale.
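Cronbach's alpha, used above to assess the internal reliability of each item group, can be computed directly from the item-by-respondent score matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch on invented 5-point Likert data:

```python
def cronbach_alpha(items):
    """items: list of per-item score lists (same respondents in each).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    def var(xs):                      # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical responses: 4 items rated 1-5 by 5 respondents.
responses = [[4, 5, 3, 4, 2],
             [4, 4, 3, 5, 2],
             [5, 4, 2, 4, 3],
             [4, 5, 3, 4, 2]]
print(round(cronbach_alpha(responses), 2))
```

Values near or above 0.7 are conventionally read as acceptable internal consistency; the low scores reported in the pilot would correspond to item groups whose responses do not co-vary this tightly.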

Keywords: cross-cultural, memorization, practice, questionnaire, young musicians

Procedia PDF Downloads 103
165 Predicting Career Adaptability and Optimism among University Students in Turkey: The Role of Personal Growth Initiative and Socio-Demographic Variables

Authors: Yagmur Soylu, Emir Ozeren, Erol Esen, Digdem M. Siyez, Ozlem Belkis, Ezgi Burc, Gülce Demirgurz

Abstract:

The aim of the study is to determine the predictive power of personal growth initiative and socio-demographic variables (such as sex, grade, and working condition) on the career adaptability and optimism of bachelor students at Dokuz Eylul University in Turkey. According to career construction theory, career adaptability is viewed as a psychosocial construct that refers to an individual’s resources for dealing with current and expected tasks, transitions, and traumas in their occupational roles. Career optimism is defined as the expectation of positive outcomes in one’s future career development, or as an emphasis on the positive aspects of events and feeling comfortable about the career planning process. Personal growth initiative (PGI) is defined as being proactive about one’s personal development; personal growth itself is the active and intentional engagement in the process of growing as a person. A study conducted on college students revealed that individuals with a high self-development orientation make more effort to discover the requirements of the profession and workplace than individuals with a low personal development orientation. University life is a period in which social relations and the importance of academic activities increase and students strive to progress along their career paths; it is also an environment that offers students opportunities for self-realization. For these reasons, personal growth initiative is potentially an important variable with a key role during the transition from university to working life. Based on the review of the literature, it is expected that an individual’s personal growth initiative, sex, grade, and working condition will significantly predict career adaptability.
In the relevant literature, relatively few studies are available on the career adaptability and optimism of university students, and most of the existing studies have been carried out with limited respondents. In this study, the authors aim to conduct comprehensive research with a large representative sample of bachelor students at Dokuz Eylul University, Izmir, Turkey. To date, personal growth initiative and career development constructs have been discussed predominantly in western contexts, where individualistic tendencies are likely to be seen. Thus, the examination of the same relationship within the context of Turkey, where collectivistic cultural characteristics are more commonly observed, is expected to offer valuable insights and provide an important contribution to the literature. The participants in this study comprised 1500 undergraduate students drawn from thirteen faculties of Dokuz Eylul University. Stratified and random sampling methods were adopted for the selection of participants. The Personal Growth Initiative Scale-II and the Career Futures Inventory were used as the major measurement tools. In the data analysis stage, several statistical analyses, including regression analysis, one-way ANOVA, and t-tests, will be conducted to reveal the relationships among the constructs under investigation. At the end of this project, we will be able to determine the levels of career adaptability and optimism of university students, creating fertile ground for intervention techniques that contribute to the emergence of a psychologically and socially healthier, more productive younger generation.

Keywords: career optimism, career adaptability, personal growth initiative, university students

Procedia PDF Downloads 389
164 Improvement of Activity of β-galactosidase from Kluyveromyces lactis via Immobilization on Polyethylenimine-Chitosan

Authors: Carlos A. C. G. Neto, Natan C. G. e Silva , Thaís de O. Costa, Luciana R. B. Gonçalves, Maria V. P. Rocha

Abstract:

β-galactosidases (E.C. 3.2.1.23) are enzymes that have attracted attention for catalyzing the hydrolysis of lactose and for producing galacto-oligosaccharides by favoring transgalactosylation reactions. When immobilized, these enzymes can have some of their characteristics substantially improved, and the coating of supports with multifunctional polymers is a promising alternative to enhance the stability of the biocatalysts; among these polymers, polyethylenimine (PEI) stands out. PEI is a flexible polymer that conforms to the structure of the enzyme, giving greater stability, especially to multimeric enzymes such as β-galactosidases, and it also protects them from environmental variations. The use of a chitosan support coated with PEI could improve the catalytic efficiency of β-galactosidase from Kluyveromyces lactis in the transgalactosylation reaction for the production of prebiotics such as lactulose, since this strain is more effective in the hydrolysis reaction. In this context, the aim of the present work was first to develop biocatalysts of β-galactosidase from K. lactis immobilized on chitosan coated with PEI, determining the immobilization parameters and the operational and thermal stability, and then to apply them in hydrolysis and transgalactosylation reactions to produce lactulose using whey as a substrate. The immobilization of β-galactosidase on chitosan, previously functionalized with 0.8% (v/v) glutaraldehyde and then coated with a 10% (w/v) PEI solution, was evaluated using an enzymatic load of 10 mg of protein per gram of support. Subsequently, the hydrolysis and transgalactosylation reactions were conducted at 50 °C and 120 RPM for 20 minutes, using whey supplemented with fructose at a ratio of 1:2 lactose/fructose, totaling 200 g/L. Operational stability studies were performed under the same conditions for 10 cycles. Thermal stability assays were conducted at 50 °C in 50 mM phosphate buffer, pH 6.6, with 0.1 mM MnCl2.
The biocatalyst whose support was coated was named CHI_GLU_PEI_GAL, and the uncoated one was named CHI_GLU_GAL. Coating the support with PEI considerably improved the immobilization parameters: the immobilization yield increased from 56.53% to 97.45%, the biocatalyst activity from 38.93 U/g to 95.26 U/g, and the efficiency from 3.51% to 6.0% for the uncoated and coated supports, respectively. The biocatalyst CHI_GLU_PEI_GAL outperformed CHI_GLU_GAL in the hydrolysis of lactose and the production of lactulose, converting 97.05% of the lactose at 5 min of reaction and producing 7.60 g/L of lactulose in the same time interval. The CHI_GLU_PEI_GAL biocatalyst was stable in the lactose hydrolysis reactions during the 10 cycles evaluated, still converting 73.45% of the lactose after the tenth cycle; in lactulose production it was stable up to the fifth cycle evaluated, producing 10.95 g/L of lactulose. However, the thermal stability of the CHI_GLU_GAL biocatalyst was superior, with a half-life 6 times longer, probably because the enzyme was immobilized by covalent bonding, which is stronger than the adsorption used for CHI_GLU_PEI_GAL. Therefore, the strategy of coating the supports with PEI has proven effective for the immobilization of β-galactosidase from K. lactis, considerably improving the immobilization parameters as well as the catalytic action of the enzyme. In addition, the process can be economically viable owing to the use of an industrial residue as substrate.
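The immobilization yield and efficiency figures quoted above follow standard definitions: yield is the share of offered activity removed from solution during immobilization, and efficiency is the activity actually expressed on the support relative to the activity immobilized. A sketch of the arithmetic with hypothetical activity measurements (the numbers are chosen to be of the same order as those reported, but are not the study's data):

```python
def immobilization_params(offered_u, supernatant_u, observed_u_per_g):
    """Standard definitions (all inputs hypothetical here):
    yield%      = activity removed from solution / activity offered
    efficiency% = activity expressed on support / activity immobilized"""
    immobilized_u = offered_u - supernatant_u
    yield_pct = 100.0 * immobilized_u / offered_u
    efficiency_pct = 100.0 * observed_u_per_g / immobilized_u
    return yield_pct, efficiency_pct

# Hypothetical run: 1600 U offered per g of support, 40 U left in the
# supernatant, 95 U/g measured on the finished biocatalyst.
y, e = immobilization_params(1600.0, 40.0, 95.0)
print(f"yield {y:.1f}%  efficiency {e:.1f}%")
```

Note that a high yield with a low efficiency, as in the coated-support case above, means the enzyme binds almost completely but expresses only a fraction of its offered activity once attached.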

Keywords: β-galactosidase, immobilization, kluyveromyces lactis, lactulose, polyethylenimine, transgalactosylation reaction, whey

Procedia PDF Downloads 90
163 Becoming a Good-Enough White Therapist: Experiences of International Students in Psychology Doctoral Programs

Authors: Mary T. McKinley

Abstract:

As socio-economic globalization impacts education and turns knowledge into a commodity, institutions of higher education are becoming more intentional about infusing a global and intercultural perspective into education via the recruitment of international students. Coming from dissimilar cultures, many of these students are evaluated and held accountable to Euro-American values of independence, self-reliance, and autonomy. Not surprisingly, these students often experience culture shock, with deleterious effects on their mental health and academic functioning. It is therefore critical to understand the experiences of international students, with the hope that such knowledge will keep the field of psychology from promulgating Eurocentric ideals and values and prevent the training of these students as good-enough White therapists. Using a critical narrative inquiry framework, this study elicits stories about the challenges encountered by international students as they navigate their clinical training in the presence of acculturative stress and potentially different worldviews. With its emphasis on storytelling as meaning-making, narrative research design hinges on the assumption that people are interpretive beings who make meaning of themselves and their world through the language of stories. Dominant, socially constructed narratives also play a central role in creating and maintaining hegemonic structures that privilege certain individuals and ideologies at the expense of others. On this premise, narrative inquiry begins with an exploration of the experiences of participants in their lived stories. Bounded narrative segments were read, interpreted, and analyzed using a critical-events approach. Throughout the process, issues of reliability and researcher bias were addressed by keeping a reflective analytic memo, as well as by triangulating the data through peer review and check-ins with participants.
The findings situate culture at the epicenter of international students’ acculturation challenges as well as their resiliency in psychology doctoral programs. It was not uncommon for these international students to experience ethical dilemmas inherent in learning content that conflicted with their cultural beliefs and values. Issues of cultural incongruence appear to be further exacerbated by visible markers for differences like speech accent and clothing attire. These stories also link the acculturative stress reported by international students to the experiences of perceived racial discrimination and lack of support from the faculty, administration, peers, and the society at large. Beyond the impact on the international students themselves, there are implications for internationalization in psychology with the goal of equipping doctoral programs to be better prepared to meet the needs of their international students. More than ever before, programs need to liaise with international students’ services and work in tandem to meet the unique needs of this population of students. Also, there exists a need for multiculturally competent supervisors working with international students with varying degrees of acculturation. In addition to making social justice and advocacy salient in students’ multicultural training, it may be helpful for psychology doctoral programs to be more intentional about infusing cross-cultural theories, indigenous psychotherapies, and/or when practical, the possibility for geographically cross-cultural practicum experiences in the home countries of international students while taking into consideration the ethical issues for virtual supervision.

Keywords: decolonizing pedagogies, international students, multiculturalism, psychology doctoral programs

Procedia PDF Downloads 94
162 Monitoring and Evaluation of Web-Services Quality and Medium-Term Impact on E-Government Agencies' Efficiency

Authors: A. F. Huseynov, N. T. Mardanov, J. Y. Nakhchivanski

Abstract:

This practical research aims to improve the management quality and efficiency of public administration agencies providing e-services. The monitoring system developed will provide a continuous review of the websites' compliance with the selected indicators, their evaluation based on those indicators, and a ranking of services according to the quality criteria. The responsible departments in the government agencies were surveyed; the questionnaire covers issues of management and feedback, the e-services provided, and the application of information systems. By analyzing the main affecting factors and barriers, recommendations will be given that lead to decisions strengthening the state agencies' competencies for the management and provision of their services. Component 1: e-services monitoring system. Three separate monitoring activities are proposed to be executed in parallel. (1) Continuous tracing of e-government sites using a built-in web-monitoring program; this program generates several quantitative values which mostly relate to the technical characteristics and performance of the websites. (2) Expert assessment of e-government sites in accordance with two general criteria. Criterion 1: technical quality of the site. Criterion 2: usability/accessibility (load, see, use). Each high-level criterion is in turn subdivided into several sub-criteria, such as the fonts and the color of the background (is it readable?), W3C coding standards, the availability of robots.txt and the site map, the search engine, the feedback/contact mechanisms, and the security mechanisms. (3) An on-line survey of the users/citizens: a small group of questions embedded in the e-service websites. The questionnaires comprise information concerning navigation, the users' experience with the website (whether it was positive or negative), etc.
Automated monitoring of websites on its own cannot capture the whole evaluation process and should therefore be seen as a complement to experts' manual web evaluations. All of the separate results were integrated to provide a complete evaluation picture. Component 2: assessment of the efficiency of agencies/departments in providing e-government services. The relevant indicators to evaluate the efficiency and effectiveness of e-services were identified; the survey was conducted in all the governmental organizations (ministries, committees, and agencies) that provide electronic services for citizens or businesses; and the quantitative and qualitative measures cover the following sections of activity: e-governance, e-services, feedback from users, and the information systems at the agencies' disposal. Main results: 1. The software program and the set of indicators for website evaluation have been developed, and the results of pilot monitoring have been presented. 2. The (internal) efficiency of the e-government agencies has been evaluated based on the survey results, with practical recommendations related to human potential, the information systems used, and the e-services provided.
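The automatable part of the technical-quality criterion (presence of robots.txt and a sitemap, security mechanisms, a search engine, etc.) can be sketched as a weighted scoring function over boolean check results. The check names and equal weights below are illustrative assumptions, not the project's actual indicator set; in a real pipeline the booleans would be filled in by the web-monitoring program.

```python
def technical_quality_score(checks, weights=None):
    """Weighted share of passed checks, scaled to 0-100.
    checks: dict mapping check name -> bool (pass/fail)."""
    weights = weights or {name: 1.0 for name in checks}
    total = sum(weights[name] for name in checks)
    passed = sum(weights[name] for name, ok in checks.items() if ok)
    return 100.0 * passed / total

# Illustrative (invented) results for one e-government site:
site_checks = {
    "robots_txt_present": True,
    "sitemap_present": True,
    "w3c_valid_markup": False,
    "https_enabled": True,
    "search_available": True,
    "feedback_form_present": False,
}
print(f"technical quality: {technical_quality_score(site_checks):.0f}/100")
```

Scores of this kind make the ranking step straightforward: sites are ordered by their aggregate score, and the failed checks double as the list of concrete remediation recommendations for the agency.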

Keywords: e-government, web-sites monitoring, survey, internal efficiency

161 X-Ray Detector Technology Optimization in Computed Tomography

Authors: Aziz Ikhlef

Abstract:

Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator–photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of routing runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise is improved because the reduced routing lowers the load capacitance. This translates into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT, or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper presents an overview of detector technologies and image chain improvements investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners, both in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed and crosstalk, to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise and temporal/spatial resolution.
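The cross-contamination constraint for fast-kVp switching can be made concrete with a simple model. As a numerical illustration only (a single-exponential decay is an assumption, not the paper's analysis, and the time constants are hypothetical), the residual fraction of a previous sample remaining after one sampling interval is:

```python
import math

# Illustrative model: treat the scintillator's primary decay as a single
# exponential with time constant tau. The fraction of signal remaining dt
# after excitation stops approximates the sample-to-sample contamination
# between alternating 80 kVp and 140 kVp views.
def residual_fraction(tau_us, dt_us):
    """Fraction of signal remaining dt_us microseconds after excitation."""
    return math.exp(-dt_us / tau_us)

# Fast-kVp switching collects alternating views a few hundred microseconds
# apart; a faster primary decay sharply reduces the residual signal.
for tau in (3.0, 30.0):  # decay constants in microseconds (assumed values)
    print(f"tau={tau} us -> residual after 200 us: "
          f"{residual_fraction(tau, 200.0):.2e}")
```

Under this toy model, shortening the primary decay by a factor of ten reduces the residual signal by many orders of magnitude, which is why scintillator primary speed dominates the dual-energy cross-contamination budget.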

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

160 Influence of Glass Plates Different Boundary Conditions on Human Impact Resistance

Authors: Alberto Sanchidrián, José A. Parra, Jesús Alonso, Julián Pecharromán, Antonia Pacios, Consuelo Huerta

Abstract:

Glass is a commonly used building material; there is no unique design solution, as plates with different numbers of layers and interlayers may be used. In most façades, security glazing has to be used, classified according to its performance in the pendulum impact test. The European Standard EN 12600 establishes an impact test procedure for the classification, from the point of view of human safety, of flat plates of different thicknesses, using a pendulum of two tires and 50 kg mass that impacts the plate from different heights. However, this test does not replicate the actual dimensions and border conditions used in building configurations, so the real stress distribution is not determined by it. The influence of different boundary conditions, such as those employed on construction sites, is not well taken into account when testing the behaviour of safety glazing, and there is no detailed procedure or criterion to determine the glass resistance against human impact. To reproduce the actual boundary conditions on site, when needed, the pendulum test is arranged to be used "in situ", with no account taken of load control or stiffness, and without a standard procedure. The fracture stress of small and large glass plates fits a Weibull distribution with considerable dispersion, so conservative values are adopted for the admissible fracture stress under static loads. In fact, tests performed for human impact give a fracture strength two or three times higher, many times without total fracture of the glass plate. Newer standards, such as DIN 18008-4, allow an admissible fracture stress 2.5 times higher than the one used for static and wind loads. Two working areas are now open: a) to define a standard for the 'in situ' test; b) to prepare a laboratory procedure that allows testing with a more realistic stress distribution.
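The Weibull strength model mentioned above, and the conservative admissible stress derived from it, can be sketched numerically. This is an illustration only; the shape and scale parameters below are assumed values, not the authors' fitted data:

```python
import math

# Two-parameter Weibull model for glass fracture stress.
# Shape k and scale lam (MPa) are illustrative assumptions.
def fracture_probability(sigma, k, lam):
    """P(fracture at stress <= sigma) under a Weibull(k, lam) strength model."""
    return 1.0 - math.exp(-((sigma / lam) ** k))

def design_stress(p, k, lam):
    """Stress whose fracture probability is p (inverse Weibull CDF)."""
    return lam * (-math.log(1.0 - p)) ** (1.0 / k)

k, lam = 6.0, 70.0                       # assumed shape and scale (MPa)
sigma_5 = design_stress(0.05, k, lam)    # conservative 5%-fractile stress
print(f"5% design stress: {sigma_5:.1f} MPa")
# Standards such as DIN 18008-4 allow roughly 2.5x the static admissible
# stress for human-impact (dynamic) loading:
print(f"dynamic admissible (2.5x): {2.5 * sigma_5:.1f} MPa")
```

The wide dispersion of the distribution is what forces the low-fractile (conservative) static value, while the impact tests justify the higher dynamic factor.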
To work on both research lines, a laboratory has been developed that allows testing medium-size specimens with different border conditions. A special steel frame allows reproducing the stiffness of the glass support substructure, including a rigid condition used as a reference. The dynamic behaviour of the glass plate and its support substructure has been characterized with finite element models updated with modal test results. In addition, a new portable impact machine is being used to obtain sufficient force and direction control during the impact test. An impact energy of 100 J is used. To avoid problems with broken glass plates, the tests have been done using an aluminium plate of 1000 mm × 700 mm and 10 mm thickness supported on four sides; three different substructure stiffness conditions are used. A detailed control of the dynamic stiffness and the behaviour of the plate is done with modal tests. Repeatability of the test and reproducibility of the results prove that a procedure to control both the stiffness of the plate and the impact level is necessary.

Keywords: glass plates, human impact test, modal test, plate boundary conditions
